Building Heat Demand Prediction Based on Reinforcement Learning for Thermal Comfort Management
The accurate prediction of building heat demand plays the critical role in refined management of heating, which is the basis for on-demand heating operation. This paper proposed a prediction model framework for building heat demand based on reinforcement learning. The environment, reward function and agent of the model were established, and experiments were carried out to verify the effectiveness and advancement of the model. Through the building heat demand prediction, the model proposed in this study can dynamically control the indoor temperature within the acceptable interval (19–23 °C). Moreover, the experimental results showed that after the model reached the primary, intermediate and advanced targets in training, the proportion of time that the indoor temperature can be controlled within the target interval (20.5–21.5 °C) was over 35%, 55% and 70%, respectively. In addition to maintaining indoor temperature, the model proposed in this study also achieved on-demand heating operation. The model achieving the advanced target, which had the best indoor temperature control performance, only had a supply–demand error of 4.56%.
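The abstract names the three reinforcement learning components (environment, reward function, agent) and the two temperature intervals, but the record gives no implementation details. The following is a minimal, purely illustrative Python sketch of such a setup, assuming a first-order building thermal model, a small set of discrete heat-supply actions and a tabular Q-learning agent whose reward is built around the quoted intervals; none of these modelling choices or parameter values are taken from the paper itself.

# Purely illustrative sketch: the thermal model, action set, reward shaping and
# Q-learning agent below are assumptions, not the model described in the paper.
import random

TARGET_LOW, TARGET_HIGH = 20.5, 21.5   # target interval quoted in the abstract (deg C)
ACCEPT_LOW, ACCEPT_HIGH = 19.0, 23.0   # acceptable interval quoted in the abstract (deg C)
ACTIONS = [0.0, 1.0, 2.0, 2.5, 3.0]    # assumed discrete heat-supply levels (kW)

def step(t_in, t_out, heat_kw, dt_h=1.0, capacity=2.0, ua=0.1):
    # Assumed first-order thermal model: C * dT/dt = Q_supply - UA * (T_in - T_out)
    return t_in + dt_h * (heat_kw - ua * (t_in - t_out)) / capacity

def reward(t_in):
    # Assumed reward: bonus inside the target band, penalty outside the acceptable band,
    # mild shaping toward 21 deg C in between.
    if TARGET_LOW <= t_in <= TARGET_HIGH:
        return 1.0
    if t_in < ACCEPT_LOW or t_in > ACCEPT_HIGH:
        return -1.0
    return -abs(t_in - 21.0)

def state(t_in):
    # 0.5-degree bins so a simple tabular agent can be used.
    return int(round(t_in * 2))

q = {}                                 # Q-table: (state, action index) -> value
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    t_in, t_out = 18.0, -5.0           # assumed initial indoor / constant outdoor temperature
    for hour in range(24):
        s = state(t_in)
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        t_in = step(t_in, t_out, ACTIONS[a])
        r = reward(t_in)
        s_next = state(t_in)
        best_next = max(q.get((s_next, i), 0.0) for i in range(len(ACTIONS)))
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + alpha * (r + gamma * best_next - old)

In a sketch like this, the learned action for each temperature state plays the role the abstract assigns to the heat demand prediction: the heat supply the agent expects will keep the room inside the target band.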
Authors: Chendong Wang; Lihong Zheng; Jianjuan Yuan; Ke Huang; Zhihua Zhou
Format: Electronic article
Language: English
Published: 2022
Subjects: reinforcement learning; heat demand prediction; on-demand heating operation; deep learning; Technology (T)
Published in: Energies - MDPI AG, 2008, 15(2022), 21, p 7856 (volume:15; year:2022; number:21, p 7856)
Links (open access):
  https://doi.org/10.3390/en15217856
  https://doaj.org/article/018b123f4da3405bab6aef25061f3676
  https://www.mdpi.com/1996-1073/15/21/7856
DOI / URN: 10.3390/en15217856
Catalogue ID: DOAJ083619070
LEADER  01000caa a22002652 4500
001     DOAJ083619070
003     DE-627
005     20240414172630.0
007     cr uuu---uuuuu
008     230311s2022 xx |||||o 00| ||eng c
024 7   |a 10.3390/en15217856 |2 doi
035     |a (DE-627)DOAJ083619070
035     |a (DE-599)DOAJ018b123f4da3405bab6aef25061f3676
040     |a DE-627 |b ger |c DE-627 |e rakwb
041     |a eng
100 0   |a Chendong Wang |e verfasserin |4 aut
245 1 0 |a Building Heat Demand Prediction Based on Reinforcement Learning for Thermal Comfort Management
264   1 |c 2022
336     |a Text |b txt |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
520     |a The accurate prediction of building heat demand plays the critical role in refined management of heating, which is the basis for on-demand heating operation. This paper proposed a prediction model framework for building heat demand based on reinforcement learning. The environment, reward function and agent of the model were established, and experiments were carried out to verify the effectiveness and advancement of the model. Through the building heat demand prediction, the model proposed in this study can dynamically control the indoor temperature within the acceptable interval (19–23 °C). Moreover, the experimental results showed that after the model reached the primary, intermediate and advanced targets in training, the proportion of time that the indoor temperature can be controlled within the target interval (20.5–21.5 °C) was over 35%, 55% and 70%, respectively. In addition to maintaining indoor temperature, the model proposed in this study also achieved on-demand heating operation. The model achieving the advanced target, which had the best indoor temperature control performance, only had a supply–demand error of 4.56%.
650   4 |a reinforcement learning
650   4 |a heat demand prediction
650   4 |a on-demand heating operation
650   4 |a deep learning
653   0 |a Technology
653   0 |a T
700 0   |a Lihong Zheng |e verfasserin |4 aut
700 0   |a Jianjuan Yuan |e verfasserin |4 aut
700 0   |a Ke Huang |e verfasserin |4 aut
700 0   |a Zhihua Zhou |e verfasserin |4 aut
773 0 8 |i In |t Energies |d MDPI AG, 2008 |g 15(2022), 21, p 7856 |w (DE-627)572083742 |w (DE-600)2437446-5 |x 19961073 |7 nnns
773 1 8 |g volume:15 |g year:2022 |g number:21, p 7856
856 4 0 |u https://doi.org/10.3390/en15217856 |z kostenfrei
856 4 0 |u https://doaj.org/article/018b123f4da3405bab6aef25061f3676 |z kostenfrei
856 4 0 |u https://www.mdpi.com/1996-1073/15/21/7856 |z kostenfrei
856 4 2 |u https://doaj.org/toc/1996-1073 |y Journal toc |z kostenfrei
912     |a GBV_USEFLAG_A
912     |a SYSFLAG_A
912     |a GBV_DOAJ
912     |a GBV_ILN_20
912     |a GBV_ILN_22
912     |a GBV_ILN_23
912     |a GBV_ILN_24
912     |a GBV_ILN_39
912     |a GBV_ILN_40
912     |a GBV_ILN_60
912     |a GBV_ILN_62
912     |a GBV_ILN_63
912     |a GBV_ILN_65
912     |a GBV_ILN_69
912     |a GBV_ILN_70
912     |a GBV_ILN_73
912     |a GBV_ILN_95
912     |a GBV_ILN_105
912     |a GBV_ILN_110
912     |a GBV_ILN_151
912     |a GBV_ILN_161
912     |a GBV_ILN_170
912     |a GBV_ILN_206
912     |a GBV_ILN_213
912     |a GBV_ILN_230
912     |a GBV_ILN_285
912     |a GBV_ILN_293
912     |a GBV_ILN_370
912     |a GBV_ILN_602
912     |a GBV_ILN_2005
912     |a GBV_ILN_2009
912     |a GBV_ILN_2011
912     |a GBV_ILN_2014
912     |a GBV_ILN_2055
912     |a GBV_ILN_2108
912     |a GBV_ILN_2111
912     |a GBV_ILN_2119
912     |a GBV_ILN_4012
912     |a GBV_ILN_4037
912     |a GBV_ILN_4112
912     |a GBV_ILN_4125
912     |a GBV_ILN_4126
912     |a GBV_ILN_4249
912     |a GBV_ILN_4305
912     |a GBV_ILN_4306
912     |a GBV_ILN_4307
912     |a GBV_ILN_4313
912     |a GBV_ILN_4322
912     |a GBV_ILN_4323
912     |a GBV_ILN_4324
912     |a GBV_ILN_4325
912     |a GBV_ILN_4335
912     |a GBV_ILN_4338
912     |a GBV_ILN_4367
912     |a GBV_ILN_4700
951     |a AR
952     |d 15 |j 2022 |e 21, p 7856
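The fields above follow MARC 21 tagging (245 $a for the title, 024 $a for the DOI, 520 $a for the abstract, 856 $u for the access URLs). As a purely illustrative sketch, assuming the record has been exported as a local MARCXML file named DOAJ083619070.xml (the file name and the choice of fields are hypothetical, not part of the catalogue entry), the pymarc library could read those fields back out like this:

# Illustrative only: read the bibliographic fields of this record from a
# hypothetical local MARCXML export using pymarc.
from pymarc import parse_xml_to_array

records = parse_xml_to_array("DOAJ083619070.xml")   # assumed local copy of the record
rec = records[0]

title    = rec["245"]["a"]                           # 245 $a: title
doi      = rec["024"]["a"]                           # 024 $a: DOI
abstract = rec["520"]["a"]                           # 520 $a: abstract
authors  = [f["a"] for f in rec.get_fields("100", "700")]
links    = [f["u"] for f in rec.get_fields("856")]

print(title)
print(doi)
print("; ".join(authors))
print("\n".join(links))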