MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing
The number of Vehicle Equipment (VE) connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires a lot of computing resources. Therefore, it is a promising issue that offloading compute-intensive tasks from resource-limite...
Detailed description
Author: Liu, Shun [author]
Format: E-Article
Language: English
Published: 2022 (transfer abstract)
Subject headings: (see record below)
Extent: 17
Host item: Contained in: 25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours - Al Asaadi, Zahra ELSEVIER, 2022, Amsterdam [u.a.]
Host item: volume:167 ; year:2022 ; pages:1-17 ; extent:17
Links: DOI / URN: 10.1016/j.jpdc.2022.04.013
Catalog ID: ELV057987092
LEADER 01000caa a22002652 4500
001 ELV057987092
003 DE-627
005 20230626050058.0
007 cr uuu---uuuuu
008 220808s2022 xx |||||o 00| ||eng c
024 7 |a 10.1016/j.jpdc.2022.04.013 |2 doi
028 5 2 |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001797.pica
035 |a (DE-627)ELV057987092
035 |a (ELSEVIER)S0743-7315(22)00088-0
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
082 0 4 |a 610 |q VZ
084 |a 44.96 |2 bkl
100 1 |a Liu, Shun |e verfasserin |4 aut
245 1 0 |a MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing
264 1 |c 2022transfer abstract
300 |a 17
336 |a nicht spezifiziert |b zzz |2 rdacontent
337 |a nicht spezifiziert |b z |2 rdamedia
338 |a nicht spezifiziert |b zu |2 rdacarrier
520 |a The number of Vehicle Equipment (VE) units connected to the Internet is increasing, and these VEs generate tasks containing large amounts of data. Processing these tasks requires substantial computing resources, so offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing, and computation, is a promising approach. In a network, multiple providers operate VEC servers. When a vehicle generates a task, our goal is to decide intelligently whether and when to offload it to a VEC server so as to minimize the task completion time and the total big data processing time. As the vehicle passes VEC servers, it can offload its task to the server in the current communication range, or continue driving until it reaches the next server's communication range. This can be viewed as an asset selling problem. Making a smart decision with only a local view is challenging because the vehicle does not know when the next VEC server will become available or how much computing capacity it will have. First, this paper formulates the problem as a Markov Decision Process (MDP), defining and analyzing the state set, the action set, the reward model, and the state transition probability distribution. Second, it solves this MDP with the Asynchronous Advantage Actor-Critic (A3C) algorithm, building the elements of A3C and using the Actor (the policy function) to generate the vehicle's two actions: offloading, and moving without offloading. Third, it uses the Critic (the value function) to evaluate the Actor's behavior and guide the Actor's actions in subsequent stages. The Actor runs from the initial state in the state space until it enters the termination state, forming a complete decision-making process.
By learning to minimize the completion time of task offloading, the scheme reduces the delay of big data processing. Compared with the Immediately Offload (IO) scheme and the Expect Offload (EO) scheme, the proposed MIDP scheme reduces the average task offloading delay by 29.93% and 29.99%, respectively; it is close to the EO scheme in task completion rate and improves on the IO scheme by up to 66.6%.
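The offload-or-drive decision described in the abstract can be made concrete with a tiny finite-horizon sketch. This is a minimal illustration with made-up numbers, not the paper's method: the paper solves the MDP with A3C under uncertainty about future servers, whereas the sketch below assumes a fixed, fully known sequence of hypothetical servers and uses plain backward induction to decide, at each server, whether to offload now or keep driving.

```python
# Hypothetical expected processing delays (seconds) at the servers the
# vehicle will pass, and an assumed delay for driving to the next server.
# These values are illustrative only; they do not come from the paper.
SERVERS = [8.0, 5.0, 3.0, 6.0]
DRIVE_COST = 2.0

def best_policy(servers, drive_cost):
    """For each server index, return whether to offload there, plus the
    minimal expected completion delay from that point onward."""
    n = len(servers)
    cost = [0.0] * n
    offload = [False] * n
    # At the last server the vehicle must offload (no server comes after).
    cost[n - 1] = servers[n - 1]
    offload[n - 1] = True
    # Backward induction: offload now, or pay drive_cost and continue.
    for i in range(n - 2, -1, -1):
        stay = servers[i]
        go = drive_cost + cost[i + 1]
        offload[i] = stay <= go
        cost[i] = min(stay, go)
    return offload, cost

decisions, expected = best_policy(SERVERS, DRIVE_COST)
# With the assumed numbers, the vehicle skips the first (busy) server
# and offloads at the second-best opportunity it reaches.
```

The vehicle's real dilemma, which A3C addresses, is that the future entries of `SERVERS` are unknown when the decision at server `i` must be made; the deterministic sketch only shows the structure of the trade-off.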
650 7 |a Delay-aware |2 Elsevier
650 7 |a Unmanned aerial vehicles |2 Elsevier
650 7 |a Big data |2 Elsevier
650 7 |a Task offload |2 Elsevier
650 7 |a Energy efficient |2 Elsevier
700 1 |a Yang, Qiang |4 oth
700 1 |a Zhang, Shaobo |4 oth
700 1 |a Wang, Tian |4 oth
700 1 |a Xiong, Neal N. |4 oth
773 0 8 |i Enthalten in |n Elsevier |a Al Asaadi, Zahra ELSEVIER |t 25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours |d 2022 |g Amsterdam [u.a.] |w (DE-627)ELV008974187
773 1 8 |g volume:167 |g year:2022 |g pages:1-17 |g extent:17
856 4 0 |u https://doi.org/10.1016/j.jpdc.2022.04.013 |3 Volltext
912 |a GBV_USEFLAG_U
912 |a GBV_ELV
912 |a SYSFLAG_U
912 |a SSG-OLC-PHA
936 b k |a 44.96 |j Zahnmedizin |q VZ
951 |a AR
952 |d 167 |j 2022 |h 1-17 |g 17
language |
English |
source |
Enthalten in 25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours Amsterdam [u.a.] volume:167 year:2022 pages:1-17 extent:17 |
sourceStr |
Enthalten in 25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours Amsterdam [u.a.] volume:167 year:2022 pages:1-17 extent:17 |
format_phy_str_mv |
Article |
bklname |
Zahnmedizin |
institution |
findex.gbv.de |
topic_facet |
Delay-aware Unmanned aerial vehicles Big data Task offload Energy efficient |
dewey-raw |
610 |
isfreeaccess_bool |
false |
container_title |
25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours |
authorswithroles_txt_mv |
Liu, Shun @@aut@@ Yang, Qiang @@oth@@ Zhang, Shaobo @@oth@@ Wang, Tian @@oth@@ Xiong, Neal N. @@oth@@ |
publishDateDaySort_date |
2022-01-01T00:00:00Z |
hierarchy_top_id |
ELV008974187 |
dewey-sort |
3610 |
id |
ELV057987092 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV057987092</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626050058.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220808s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.jpdc.2022.04.013</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001797.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV057987092</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0743-7315(22)00088-0</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">44.96</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Liu, Shun</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">17</subfield></datafield><datafield tag="336" ind1=" " ind2=" 
"><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">The number of Vehicle Equipment (VE) connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires a lot of computing resources. Therefore, it is a promising issue that offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing and computation. In a network, multiple providers provide VEC servers. When a vehicle generates a task, our goal is to make an intelligent decision on whether and when to offload this task to VEC servers to minimize the task completion time and total big data processing time. When each vehicle passes VEC servers, the vehicle can decide to offload its task to the VEC server in the current communication range, or continue to drive until it reaches the next server's communication range. This issue can be considered as an asset selling problem. It is a challenging issue to make a smart decision for the vehicle with a location view because the vehicle is not sure when the next VEC server will be available and how much about the available computing capacity of the next VEC server. Firstly, this paper formulates the problem as a Markov Decision Process (MDP), defines and analyzes the state set, action set, reward model, and state transition probability distribution. 
Then it uses Asynchronous Advantage Actor-Critic (A3C) algorithm to solve this MDP problem, builds the various elements of the A3C algorithm, uses Actor (the strategy function) to generate two actions of the vehicle: offloading and moving without offloading. Thirdly, it uses Critic (the value function) to evaluate Actor's behavior, and guide Actor's actions in subsequent stages. The Actor starts from the initial state in the state space until it enters the termination state, forming a complete decision-making process. It minimizes the completion time of task offloading through learning thereby reducing the delay of big data processing. Compared to the Immediately Offload (IO) scheme and Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99%, close to the EO scheme in terms of task completion rate and up to 66.6% improvement compared to the IO scheme.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">The number of Vehicle Equipment (VE) connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires a lot of computing resources. Therefore, it is a promising issue that offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing and computation. In a network, multiple providers provide VEC servers. When a vehicle generates a task, our goal is to make an intelligent decision on whether and when to offload this task to VEC servers to minimize the task completion time and total big data processing time. When each vehicle passes VEC servers, the vehicle can decide to offload its task to the VEC server in the current communication range, or continue to drive until it reaches the next server's communication range. This issue can be considered as an asset selling problem. 
It is a challenging issue to make a smart decision for the vehicle with a location view because the vehicle is not sure when the next VEC server will be available and how much about the available computing capacity of the next VEC server. Firstly, this paper formulates the problem as a Markov Decision Process (MDP), defines and analyzes the state set, action set, reward model, and state transition probability distribution. Then it uses Asynchronous Advantage Actor-Critic (A3C) algorithm to solve this MDP problem, builds the various elements of the A3C algorithm, uses Actor (the strategy function) to generate two actions of the vehicle: offloading and moving without offloading. Thirdly, it uses Critic (the value function) to evaluate Actor's behavior, and guide Actor's actions in subsequent stages. The Actor starts from the initial state in the state space until it enters the termination state, forming a complete decision-making process. It minimizes the completion time of task offloading through learning thereby reducing the delay of big data processing. 
Compared to the Immediately Offload (IO) scheme and Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99%, close to the EO scheme in terms of task completion rate and up to 66.6% improvement compared to the IO scheme.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Delay-aware</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Unmanned aerial vehicles</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Big data</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Task offload</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Energy efficient</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yang, Qiang</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Shaobo</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Tian</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Xiong, Neal N.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier</subfield><subfield code="a">Al Asaadi, Zahra ELSEVIER</subfield><subfield code="t">25. 
Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours</subfield><subfield code="d">2022</subfield><subfield code="g">Amsterdam [u.a.]</subfield><subfield code="w">(DE-627)ELV008974187</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:167</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:1-17</subfield><subfield code="g">extent:17</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.jpdc.2022.04.013</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">44.96</subfield><subfield code="j">Zahnmedizin</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">167</subfield><subfield code="j">2022</subfield><subfield code="h">1-17</subfield><subfield code="g">17</subfield></datafield></record></collection>
|
author |
Liu, Shun |
spellingShingle |
Liu, Shun ddc 610 bkl 44.96 Elsevier Delay-aware Elsevier Unmanned aerial vehicles Elsevier Big data Elsevier Task offload Elsevier Energy efficient MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing |
authorStr |
Liu, Shun |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV008974187 |
format |
electronic Article |
dewey-ones |
610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
610 VZ 44.96 bkl MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing Delay-aware Elsevier Unmanned aerial vehicles Elsevier Big data Elsevier Task offload Elsevier Energy efficient Elsevier |
topic |
ddc 610 bkl 44.96 Elsevier Delay-aware Elsevier Unmanned aerial vehicles Elsevier Big data Elsevier Task offload Elsevier Energy efficient |
topic_unstemmed |
ddc 610 bkl 44.96 Elsevier Delay-aware Elsevier Unmanned aerial vehicles Elsevier Big data Elsevier Task offload Elsevier Energy efficient |
topic_browse |
ddc 610 bkl 44.96 Elsevier Delay-aware Elsevier Unmanned aerial vehicles Elsevier Big data Elsevier Task offload Elsevier Energy efficient |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
q y qy s z sz t w tw n n x nn nnx |
hierarchy_parent_title |
25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours |
hierarchy_parent_id |
ELV008974187 |
dewey-tens |
610 - Medicine & health |
hierarchy_top_title |
25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV008974187 |
title |
MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing |
ctrlnum |
(DE-627)ELV057987092 (ELSEVIER)S0743-7315(22)00088-0 |
title_full |
MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing |
author_sort |
Liu, Shun |
journal |
25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours |
journalStr |
25. Functional Outcomes Following Orbital Preservation in Patients with Surgical management of Sinonasal Tumours |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
1 |
author_browse |
Liu, Shun |
container_volume |
167 |
physical |
17 |
class |
610 VZ 44.96 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Liu, Shun |
doi_str_mv |
10.1016/j.jpdc.2022.04.013 |
dewey-full |
610 |
title_sort |
midp: an mdp-based intelligent big data processing scheme for vehicular edge computing |
title_auth |
MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing |
abstract |
The number of Vehicle Equipment (VE) units connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires substantial computing resources, so offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing, and computation, is a promising approach. In a network, VEC servers are operated by multiple providers. When a vehicle generates a task, our goal is to make an intelligent decision on whether and when to offload this task to a VEC server so as to minimize the task completion time and the total big data processing time. As a vehicle passes VEC servers, it can decide to offload its task to the VEC server in the current communication range, or continue driving until it reaches the next server's communication range. This issue can be regarded as an asset selling problem. Making a smart decision for a vehicle with only a local view of server locations is challenging, because the vehicle does not know when the next VEC server will become available or how much computing capacity it will have. Firstly, this paper formulates the problem as a Markov Decision Process (MDP), defining and analyzing the state set, action set, reward model, and state transition probability distribution. Secondly, it applies the Asynchronous Advantage Actor-Critic (A3C) algorithm to solve this MDP, building the various elements of the A3C algorithm and using the Actor (the policy function) to generate the vehicle's two actions: offloading, and moving on without offloading. Thirdly, it uses the Critic (the value function) to evaluate the Actor's behavior and to guide the Actor's actions in subsequent stages. The Actor proceeds from the initial state in the state space until it enters the termination state, forming a complete decision-making process. 
Through learning, the scheme minimizes the task offloading completion time, thereby reducing the delay of big data processing. Compared with the Immediately Offload (IO) scheme and the Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99% of theirs, respectively; its task completion rate is close to that of the EO scheme and up to 66.6% higher than that of the IO scheme. 
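The offload-or-continue decision described in the abstract is an optimal-stopping (asset-selling) problem, and its structure can be illustrated with a tiny tabular MDP. The sketch below is not the paper's A3C method: it solves a hypothetical instance by backward induction, with invented delay levels, probabilities, and drive cost, purely to show how the stopping threshold arises.

```python
# Illustrative optimal-stopping MDP for the offload-or-continue decision.
# All numbers (delay levels, probabilities, drive cost) are hypothetical;
# the paper solves its MDP with A3C, whereas this toy uses backward induction.
N_SERVERS = 5
DELAY_LEVELS = [1.0, 2.0, 4.0]   # possible offloading delays observed at a server
P_LEVEL = [0.3, 0.4, 0.3]        # probability of each delay level
DRIVE_COST = 0.5                 # extra delay for driving to the next server

def solve():
    """Backward induction: V[i] is the expected cost-to-go on arriving at
    server i, before observing that server's delay level."""
    V = [0.0] * N_SERVERS
    # At the last server the vehicle must offload, whatever the delay is.
    V[N_SERVERS - 1] = sum(p * d for p, d in zip(P_LEVEL, DELAY_LEVELS))
    policy = {}
    for i in range(N_SERVERS - 2, -1, -1):
        exp_cost = 0.0
        for p, d in zip(P_LEVEL, DELAY_LEVELS):
            offload = d                          # stop: pay this server's delay
            keep_going = DRIVE_COST + V[i + 1]   # continue toward the next server
            policy[(i, d)] = "offload" if offload <= keep_going else "continue"
            exp_cost += p * min(offload, keep_going)
        V[i] = exp_cost
    return V, policy

V, policy = solve()
```

Note how the acceptance threshold loosens toward the last server: with fewer servers left, the vehicle accepts higher delays rather than continuing to drive.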
abstractGer |
The number of Vehicle Equipment (VE) units connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires substantial computing resources, so offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing, and computation, is a promising approach. In a network, VEC servers are operated by multiple providers. When a vehicle generates a task, our goal is to make an intelligent decision on whether and when to offload this task to a VEC server so as to minimize the task completion time and the total big data processing time. As a vehicle passes VEC servers, it can decide to offload its task to the VEC server in the current communication range, or continue driving until it reaches the next server's communication range. This issue can be regarded as an asset selling problem. Making a smart decision for a vehicle with only a local view of server locations is challenging, because the vehicle does not know when the next VEC server will become available or how much computing capacity it will have. Firstly, this paper formulates the problem as a Markov Decision Process (MDP), defining and analyzing the state set, action set, reward model, and state transition probability distribution. Secondly, it applies the Asynchronous Advantage Actor-Critic (A3C) algorithm to solve this MDP, building the various elements of the A3C algorithm and using the Actor (the policy function) to generate the vehicle's two actions: offloading, and moving on without offloading. Thirdly, it uses the Critic (the value function) to evaluate the Actor's behavior and to guide the Actor's actions in subsequent stages. The Actor proceeds from the initial state in the state space until it enters the termination state, forming a complete decision-making process. 
Through learning, the scheme minimizes the task offloading completion time, thereby reducing the delay of big data processing. Compared with the Immediately Offload (IO) scheme and the Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99% of theirs, respectively; its task completion rate is close to that of the EO scheme and up to 66.6% higher than that of the IO scheme. 
abstract_unstemmed |
The number of Vehicle Equipment (VE) units connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires substantial computing resources, so offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing, and computation, is a promising approach. In a network, VEC servers are operated by multiple providers. When a vehicle generates a task, our goal is to make an intelligent decision on whether and when to offload this task to a VEC server so as to minimize the task completion time and the total big data processing time. As a vehicle passes VEC servers, it can decide to offload its task to the VEC server in the current communication range, or continue driving until it reaches the next server's communication range. This issue can be regarded as an asset selling problem. Making a smart decision for a vehicle with only a local view of server locations is challenging, because the vehicle does not know when the next VEC server will become available or how much computing capacity it will have. Firstly, this paper formulates the problem as a Markov Decision Process (MDP), defining and analyzing the state set, action set, reward model, and state transition probability distribution. Secondly, it applies the Asynchronous Advantage Actor-Critic (A3C) algorithm to solve this MDP, building the various elements of the A3C algorithm and using the Actor (the policy function) to generate the vehicle's two actions: offloading, and moving on without offloading. Thirdly, it uses the Critic (the value function) to evaluate the Actor's behavior and to guide the Actor's actions in subsequent stages. The Actor proceeds from the initial state in the state space until it enters the termination state, forming a complete decision-making process. 
Through learning, the scheme minimizes the task offloading completion time, thereby reducing the delay of big data processing. Compared with the Immediately Offload (IO) scheme and the Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99% of theirs, respectively; its task completion rate is close to that of the EO scheme and up to 66.6% higher than that of the IO scheme. 
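The Actor and Critic roles described in the abstract can be sketched with a minimal single-worker advantage actor-critic loop. This is only an illustration of the update rules: the paper uses asynchronous A3C with learned function approximators, whereas this toy uses a state-independent softmax policy, a tabular critic, a deterministic hypothetical offload delay, and invented constants.

```python
import math
import random

random.seed(0)

N_SERVERS = 5
DRIVE_COST = 0.5      # hypothetical delay for driving to the next server
OFFLOAD_DELAY = 2.33  # hypothetical (deterministic) offloading delay

theta = [0.0, 0.0]         # Actor: preferences for [offload, continue]
v = [0.0] * N_SERVERS      # Critic: value estimate per server index
ALPHA_PI, ALPHA_V, GAMMA = 0.05, 0.1, 1.0

def softmax(prefs):
    m = max(prefs)
    e = [math.exp(p - m) for p in prefs]
    s = sum(e)
    return [x / s for x in e]

for episode in range(2000):
    i = 0
    done = False
    while not done:
        probs = softmax(theta)
        if i == N_SERVERS - 1:
            a = 0  # at the last server the vehicle must offload
        else:
            a = 0 if random.random() < probs[0] else 1
        if a == 0:  # offload: terminal step, pay the offloading delay
            reward, target, done = -OFFLOAD_DELAY, -OFFLOAD_DELAY, True
        else:       # continue: pay the drive cost, bootstrap from next state
            reward = -DRIVE_COST
            target = reward + GAMMA * v[i + 1]
        # Critic: TD update toward the (possibly bootstrapped) target.
        advantage = target - v[i]
        v[i] += ALPHA_V * advantage
        # Actor: policy-gradient step scaled by the Critic's advantage.
        for j in range(2):
            grad = (1.0 if j == a else 0.0) - probs[j]
            theta[j] += ALPHA_PI * advantage * grad
        if not done:
            i += 1
```

In this toy, continuing always costs an extra 0.5 for no benefit, so the Critic's negative advantage for "continue" steadily steers the Actor toward offloading immediately, mirroring how the Critic's evaluation guides the Actor's actions in subsequent stages.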
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA |
title_short |
MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing |
url |
https://doi.org/10.1016/j.jpdc.2022.04.013 |
remote_bool |
true |
author2 |
Yang, Qiang Zhang, Shaobo Wang, Tian Xiong, Neal N. |
author2Str |
Yang, Qiang Zhang, Shaobo Wang, Tian Xiong, Neal N. |
ppnlink |
ELV008974187 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth oth |
doi_str |
10.1016/j.jpdc.2022.04.013 |
up_date |
2024-07-06T17:44:10.167Z |
_version_ |
1803852552778809344 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV057987092</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626050058.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220808s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.jpdc.2022.04.013</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001797.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV057987092</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0743-7315(22)00088-0</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">44.96</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Liu, Shun</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">17</subfield></datafield><datafield tag="336" ind1=" " ind2=" 
"><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">The number of Vehicle Equipment (VE) connected to the Internet is increasing, and these VEs generate tasks that contain large amounts of data. Processing these tasks requires a lot of computing resources. Therefore, it is a promising issue that offloading compute-intensive tasks from resource-limited vehicles to Vehicular Edge Computing (VEC) servers, which involves big data transmission, processing and computation. In a network, multiple providers provide VEC servers. When a vehicle generates a task, our goal is to make an intelligent decision on whether and when to offload this task to VEC servers to minimize the task completion time and total big data processing time. When each vehicle passes VEC servers, the vehicle can decide to offload its task to the VEC server in the current communication range, or continue to drive until it reaches the next server's communication range. This issue can be considered as an asset selling problem. It is a challenging issue to make a smart decision for the vehicle with a location view because the vehicle is not sure when the next VEC server will be available and how much about the available computing capacity of the next VEC server. Firstly, this paper formulates the problem as a Markov Decision Process (MDP), defines and analyzes the state set, action set, reward model, and state transition probability distribution. 
It minimizes the completion time of task offloading through learning, thereby reducing the delay of big data processing. Compared with the Immediately Offload (IO) scheme and the Expect Offload (EO) scheme, the MIDP scheme proposed in this paper reduces the average task offloading delay to 29.93% and 29.99%, respectively; it is close to the EO scheme in terms of task completion rate and achieves up to a 66.6% improvement over the IO scheme.

Keywords: Delay-aware; Unmanned aerial vehicles; Big data; Task offload; Energy efficient

Co-authors: Yang, Qiang; Zhang, Shaobo; Wang, Tian; Xiong, Neal N.
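The offload-or-keep-driving decision described in the abstract can be illustrated with a one-step-lookahead stand-in: at each VEC server the vehicle compares the completion time of offloading now against the expected completion time at the next server. This is only a sketch with hypothetical names and numbers; the paper's actual method learns this policy with A3C rather than using the greedy rule below.

```python
# Illustrative sketch of the MDP action set described in the abstract:
# at each VEC server a vehicle picks OFFLOAD or MOVE (keep driving).
# All numbers and identifiers here are hypothetical; the paper trains
# this policy with A3C rather than the one-step lookahead used below.

OFFLOAD, MOVE = "offload", "move"

def completion_time(task_bits, cpu_rate, link_rate):
    """Upload delay plus processing delay if the task is offloaded now."""
    return task_bits / link_rate + task_bits / cpu_rate

def one_step_lookahead(task_bits, servers, travel_time=1.0):
    """Offload at the first server whose completion time beats the
    completion time expected at the next server. `servers` is a list of
    (cpu_rate, link_rate) pairs along the route; returns (server index,
    total delay including driving time)."""
    elapsed = 0.0
    for i, (cpu, link) in enumerate(servers):
        now = completion_time(task_bits, cpu, link)
        if i + 1 == len(servers):
            action = OFFLOAD            # last chance: must offload here
        else:
            nxt_cpu, nxt_link = servers[i + 1]
            nxt = travel_time + completion_time(task_bits, nxt_cpu, nxt_link)
            action = OFFLOAD if now <= nxt else MOVE
        if action == OFFLOAD:
            return i, elapsed + now
        elapsed += travel_time          # keep driving to the next server

# Hypothetical route: a slow first server, then a faster second one.
servers = [(2e6, 1e6), (8e6, 4e6)]      # (CPU bits/s, link bits/s)
idx, delay = one_step_lookahead(1e6, servers, travel_time=0.1)
# The vehicle skips server 0 and offloads the 1-Mbit task at server 1.
```

In the paper's setting the next server's availability and capacity are uncertain, which is exactly why a learned Actor-Critic policy replaces this deterministic lookahead.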