Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system
In this paper, based on an actor–critic neural network structure and a reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed to solve the Nash equilibrium of a multiplayer nonzero-sum differential game in an adaptive fashion. From the optimal control point of view, each player, or local controller, seeks to minimize its individual infinite-horizon cost function by finding an optimal policy. In this learning framework, each player consists of one critic and one actor and implements distributed asynchronous policy iteration to optimize the decision-making process. In addition, the communication burden between the system and the players is effectively reduced by setting up a central event generator. The critic network executes fast updates by gradient-descent adaptation, while the actor network performs event-induced updates using gradient projection. Closed-loop asymptotic stability is ensured along with uniform ultimate convergence. The effectiveness of the proposed algorithm is then substantiated on a four-player nonlinear system, showing that it significantly reduces the number of samples without impairing learning accuracy. Finally, by leveraging the nonzero-sum game idea, the proposed learning scheme is applied to the lateral-directional stability of a linear aircraft system and is further extended to a nonlinear vehicle system to achieve adaptive cruise control.
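The scheme described in the abstract can be pictured with a minimal sketch: each player holds a critic that adapts at every sampling instant by gradient descent, while a central event generator broadcasts the state only when it drifts past a threshold, at which point all actors update synchronously with a projection bounding their weights. This is an illustrative toy only, not the authors' algorithm: the `Player` class, the linear approximators, the decay dynamics, the TD surrogate, the learning rates, and the 0.5 trigger threshold are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class Player:
    """One player: a linear critic (fast, time-driven updates) and a
    linear actor (event-induced updates with gradient projection)."""

    def __init__(self, n_state, lr_critic=0.05, lr_actor=0.02, bound=5.0):
        self.wc = rng.normal(size=n_state)  # critic weights
        self.wa = rng.normal(size=n_state)  # actor weights
        self.lr_critic, self.lr_actor, self.bound = lr_critic, lr_actor, bound

    def critic_step(self, x, td_error):
        # gradient-descent adaptation executed at every sampling instant
        self.wc -= self.lr_critic * td_error * x

    def actor_step(self, x):
        # event-induced update, followed by projection onto a norm ball
        self.wa -= self.lr_actor * (self.wa @ x) * x
        n = np.linalg.norm(self.wa)
        if n > self.bound:
            self.wa *= self.bound / n

def run(n_players=4, n_state=3, steps=200, threshold=0.5):
    players = [Player(n_state) for _ in range(n_players)]
    x = np.ones(n_state)   # plant state
    x_event = x.copy()     # last state broadcast by the event generator
    events = 0
    for _ in range(steps):
        u = sum(p.wa @ x for p in players)            # joint control input
        x = 0.9 * x + 0.01 * u + 0.01 * rng.normal(size=n_state)
        for p in players:
            td = p.wc @ (x - 0.95 * x_event)          # crude TD surrogate
            p.critic_step(x, td)
        if np.linalg.norm(x - x_event) > threshold:   # central event generator
            events += 1
            x_event = x.copy()
            for p in players:                         # synchronous actor updates
                p.actor_step(x_event)
    return events, steps

events, steps = run()
```

Because actors only learn and communicate at trigger instants, `events` comes out far below `steps`, which is the communication saving the abstract claims for event-triggered operation.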
Detailed description
Author: Wang, Ke [author]
Format: E-Article
Language: English
Published: 2022 (transfer abstract)
Subjects: Event-triggered communication; Actor–critic; Nonzero-sum differential game; Neural network; Asynchronous learning; Synchronous triggering
Extent: 14 pages
Parent work: Contained in: Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal - 2012, the science and engineering of measurement and automation, Amsterdam [u.a.]
Parent work: volume:129 ; year:2022 ; pages:295-308 ; extent:14
Links:
DOI / URN: 10.1016/j.isatra.2022.02.007
Catalog ID: ELV059137274
LEADER 01000caa a22002652 4500
001 ELV059137274
003 DE-627
005 20230626052306.0
007 cr uuu---uuuuu
008 221103s2022 xx |||||o 00| ||eng c
024 7 |a 10.1016/j.isatra.2022.02.007 |2 doi
028 5 2 |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001923.pica
035 |a (DE-627)ELV059137274
035 |a (ELSEVIER)S0019-0578(22)00066-0
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
082 0 4 |a 540 |q VZ
082 0 4 |a 660 |q VZ
082 0 4 |a 540 |q VZ
084 |a 35.00 |2 bkl
100 1 |a Wang, Ke |e verfasserin |4 aut
245 1 0 |a Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system
264 1 |c 2022transfer abstract
300 |a 14
336 |a nicht spezifiziert |b zzz |2 rdacontent
337 |a nicht spezifiziert |b z |2 rdamedia
338 |a nicht spezifiziert |b zu |2 rdacarrier
520 |a In this paper, based on an actor–critic neural network structure and a reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed to solve the Nash equilibrium of a multiplayer nonzero-sum differential game in an adaptive fashion. From the optimal control point of view, each player, or local controller, seeks to minimize its individual infinite-horizon cost function by finding an optimal policy. In this learning framework, each player consists of one critic and one actor and implements distributed asynchronous policy iteration to optimize the decision-making process. In addition, the communication burden between the system and the players is effectively reduced by setting up a central event generator. The critic network executes fast updates by gradient-descent adaptation, while the actor network performs event-induced updates using gradient projection. Closed-loop asymptotic stability is ensured along with uniform ultimate convergence. The effectiveness of the proposed algorithm is then substantiated on a four-player nonlinear system, showing that it significantly reduces the number of samples without impairing learning accuracy. Finally, by leveraging the nonzero-sum game idea, the proposed learning scheme is applied to the lateral-directional stability of a linear aircraft system and is further extended to a nonlinear vehicle system to achieve adaptive cruise control.
650 7 |a Event-triggered communication |2 Elsevier
650 7 |a Actor–critic |2 Elsevier
650 7 |a Nonzero-sum differential game |2 Elsevier
650 7 |a Neural network |2 Elsevier
650 7 |a Asynchronous learning |2 Elsevier
650 7 |a Synchronous triggering |2 Elsevier
700 1 |a Mu, Chaoxu |4 oth
773 0 8 |i Enthalten in |n Elsevier |t Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal |d 2012 |d the science and engineering of measurement and automation |g Amsterdam [u.a.] |w (DE-627)ELV011067004
773 1 8 |g volume:129 |g year:2022 |g pages:295-308 |g extent:14
856 4 0 |u https://doi.org/10.1016/j.isatra.2022.02.007 |3 Volltext
912 |a GBV_USEFLAG_U
912 |a GBV_ELV
912 |a SYSFLAG_U
912 |a SSG-OLC-PHA
912 |a GBV_ILN_22
912 |a GBV_ILN_40
912 |a GBV_ILN_105
936 b k |a 35.00 |j Chemie: Allgemeines |q VZ
951 |a AR
952 |d 129 |j 2022 |h 295-308 |g 14
author_variant |
k w kw |
matchkey_str |
wangkemuchaoxu:2022----:snhooserigoatrrtcerlewrsnsnhoosrge |
hierarchy_sort_str |
2022transfer abstract |
bklnumber |
35.00 |
publishDate |
2022 |
allfields |
10.1016/j.isatra.2022.02.007 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001923.pica (DE-627)ELV059137274 (ELSEVIER)S0019-0578(22)00066-0 DE-627 ger DE-627 rakwb eng 540 VZ 660 VZ 540 VZ 35.00 bkl Wang, Ke verfasserin aut Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system 2022transfer abstract 14 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier In this paper, based on actor–critic neural network structure and reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed, so as to solve Nash equilibrium of multiplayer nonzero-sum differential game in an adaptive fashion. From the point of optimal control view, each player or local controller wants to minimize the individual infinite-time cost function by finding an optimal policy. In this novel learning framework, each player consists of one critic and one actor, and implements distributed asynchronous policy iteration to optimize decision-making process. In addition, communication burden between the system and players is effectively reduced by setting up a central event generator. Critic network executes fast updates by gradient-descent adaption while actor network gives event-induced updates using the gradient projection. The closed-loop asymptotic stability is ensured along with uniform ultimate convergence. Then, the effectiveness of the proposed algorithm is substantiated on a four-player nonlinear system, revealing that it can significantly reduce sampling numbers without impairing learning accuracy. Finally, by leveraging nonzero-sum game idea, the proposed learning scheme is also applied to solve the lateral-directional stability of a linear aircraft system, and is further extended to a nonlinear vehicle system for achieving adaptive cruise control. 
In this paper, based on actor–critic neural network structure and reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed, so as to solve Nash equilibrium of multiplayer nonzero-sum differential game in an adaptive fashion. From the point of optimal control view, each player or local controller wants to minimize the individual infinite-time cost function by finding an optimal policy. In this novel learning framework, each player consists of one critic and one actor, and implements distributed asynchronous policy iteration to optimize decision-making process. In addition, communication burden between the system and players is effectively reduced by setting up a central event generator. Critic network executes fast updates by gradient-descent adaption while actor network gives event-induced updates using the gradient projection. The closed-loop asymptotic stability is ensured along with uniform ultimate convergence. Then, the effectiveness of the proposed algorithm is substantiated on a four-player nonlinear system, revealing that it can significantly reduce sampling numbers without impairing learning accuracy. Finally, by leveraging nonzero-sum game idea, the proposed learning scheme is also applied to solve the lateral-directional stability of a linear aircraft system, and is further extended to a nonlinear vehicle system for achieving adaptive cruise control. Event-triggered communication Elsevier Actor–critic Elsevier Nonzero-sum differential game Elsevier Neural network Elsevier Asynchronous learning Elsevier Synchronous triggering Elsevier Mu, Chaoxu oth Enthalten in Elsevier Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal 2012 the science and engineering of measurement and automation Amsterdam [u.a.] 
(DE-627)ELV011067004 volume:129 year:2022 pages:295-308 extent:14 https://doi.org/10.1016/j.isatra.2022.02.007 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_22 GBV_ILN_40 GBV_ILN_105 35.00 Chemie: Allgemeines VZ AR 129 2022 295-308 14 |
language |
English |
source |
Enthalten in Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal Amsterdam [u.a.] volume:129 year:2022 pages:295-308 extent:14 |
sourceStr |
Enthalten in Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal Amsterdam [u.a.] volume:129 year:2022 pages:295-308 extent:14 |
format_phy_str_mv |
Article |
bklname |
Chemie: Allgemeines |
institution |
findex.gbv.de |
topic_facet |
Event-triggered communication Actor–critic Nonzero-sum differential game Neural network Asynchronous learning Synchronous triggering |
dewey-raw |
540 |
isfreeaccess_bool |
false |
container_title |
Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal |
authorswithroles_txt_mv |
Wang, Ke @@aut@@ Mu, Chaoxu @@oth@@ |
publishDateDaySort_date |
2022-01-01T00:00:00Z |
hierarchy_top_id |
ELV011067004 |
dewey-sort |
3540 |
id |
ELV059137274 |
language_de |
englisch |
|
author |
Wang, Ke |
spellingShingle |
Wang, Ke ddc 540 ddc 660 bkl 35.00 Elsevier Event-triggered communication Elsevier Actor–critic Elsevier Nonzero-sum differential game Elsevier Neural network Elsevier Asynchronous learning Elsevier Synchronous triggering Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system |
authorStr |
Wang, Ke |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV011067004 |
format |
electronic Article |
dewey-ones |
540 - Chemistry & allied sciences 660 - Chemical engineering |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
540 VZ 660 VZ 35.00 bkl Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system Event-triggered communication Elsevier Actor–critic Elsevier Nonzero-sum differential game Elsevier Neural network Elsevier Asynchronous learning Elsevier Synchronous triggering Elsevier |
topic |
ddc 540 ddc 660 bkl 35.00 Elsevier Event-triggered communication Elsevier Actor–critic Elsevier Nonzero-sum differential game Elsevier Neural network Elsevier Asynchronous learning Elsevier Synchronous triggering |
topic_unstemmed |
ddc 540 ddc 660 bkl 35.00 Elsevier Event-triggered communication Elsevier Actor–critic Elsevier Nonzero-sum differential game Elsevier Neural network Elsevier Asynchronous learning Elsevier Synchronous triggering |
topic_browse |
ddc 540 ddc 660 bkl 35.00 Elsevier Event-triggered communication Elsevier Actor–critic Elsevier Nonzero-sum differential game Elsevier Neural network Elsevier Asynchronous learning Elsevier Synchronous triggering |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
c m cm |
hierarchy_parent_title |
Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal |
hierarchy_parent_id |
ELV011067004 |
dewey-tens |
540 - Chemistry 660 - Chemical engineering |
hierarchy_top_title |
Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV011067004 |
title |
Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system |
ctrlnum |
(DE-627)ELV059137274 (ELSEVIER)S0019-0578(22)00066-0 |
title_full |
Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system |
author_sort |
Wang, Ke |
journal |
Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal |
journalStr |
Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science 600 - Technology |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
295 |
author_browse |
Wang, Ke |
container_volume |
129 |
physical |
14 |
class |
540 VZ 660 VZ 35.00 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Wang, Ke |
doi_str_mv |
10.1016/j.isatra.2022.02.007 |
dewey-full |
540 660 |
title_sort |
asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system |
title_auth |
Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system |
abstract |
In this paper, based on an actor–critic neural network structure and a reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed to solve the Nash equilibrium of a multiplayer nonzero-sum differential game in an adaptive fashion. From the viewpoint of optimal control, each player or local controller seeks to minimize its individual infinite-horizon cost function by finding an optimal policy. In this learning framework, each player consists of one critic and one actor, and implements distributed asynchronous policy iteration to optimize the decision-making process. In addition, the communication burden between the system and the players is effectively reduced by setting up a central event generator. The critic network executes fast updates by gradient-descent adaptation, while the actor network performs event-induced updates using gradient projection. Closed-loop asymptotic stability is ensured along with uniform ultimate convergence. The effectiveness of the proposed algorithm is then substantiated on a four-player nonlinear system, revealing that it can significantly reduce the number of samples without impairing learning accuracy. Finally, by leveraging the nonzero-sum game idea, the proposed learning scheme is also applied to the lateral-directional stability problem of a linear aircraft system, and is further extended to a nonlinear vehicle system to achieve adaptive cruise control.
abstractGer |
In this paper, based on an actor–critic neural network structure and a reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed to solve the Nash equilibrium of a multiplayer nonzero-sum differential game in an adaptive fashion. From the viewpoint of optimal control, each player or local controller seeks to minimize its individual infinite-horizon cost function by finding an optimal policy. In this learning framework, each player consists of one critic and one actor, and implements distributed asynchronous policy iteration to optimize the decision-making process. In addition, the communication burden between the system and the players is effectively reduced by setting up a central event generator. The critic network executes fast updates by gradient-descent adaptation, while the actor network performs event-induced updates using gradient projection. Closed-loop asymptotic stability is ensured along with uniform ultimate convergence. The effectiveness of the proposed algorithm is then substantiated on a four-player nonlinear system, revealing that it can significantly reduce the number of samples without impairing learning accuracy. Finally, by leveraging the nonzero-sum game idea, the proposed learning scheme is also applied to the lateral-directional stability problem of a linear aircraft system, and is further extended to a nonlinear vehicle system to achieve adaptive cruise control.
abstract_unstemmed |
In this paper, based on an actor–critic neural network structure and a reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed to solve the Nash equilibrium of a multiplayer nonzero-sum differential game in an adaptive fashion. From the viewpoint of optimal control, each player or local controller seeks to minimize its individual infinite-horizon cost function by finding an optimal policy. In this learning framework, each player consists of one critic and one actor, and implements distributed asynchronous policy iteration to optimize the decision-making process. In addition, the communication burden between the system and the players is effectively reduced by setting up a central event generator. The critic network executes fast updates by gradient-descent adaptation, while the actor network performs event-induced updates using gradient projection. Closed-loop asymptotic stability is ensured along with uniform ultimate convergence. The effectiveness of the proposed algorithm is then substantiated on a four-player nonlinear system, revealing that it can significantly reduce the number of samples without impairing learning accuracy. Finally, by leveraging the nonzero-sum game idea, the proposed learning scheme is also applied to the lateral-directional stability problem of a linear aircraft system, and is further extended to a nonlinear vehicle system to achieve adaptive cruise control.
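The update split described in the abstract (fast gradient-descent critic adaptation, event-induced actor updates gated by a central event generator) can be sketched in a toy scalar form. This is an illustrative assumption, not the paper's algorithm: the function `run`, the linear weight parameterisation, the quadratic running cost, and the deviation-threshold trigger rule are all hypothetical stand-ins for the actual actor–critic scheme.

```python
# Hypothetical sketch (not the authors' code): one player's critic updates
# every step by gradient descent, while the actor updates only when a
# central event generator fires, i.e. when the state has drifted from the
# last broadcast sample by more than a threshold.
import numpy as np

def run(steps=200, threshold=0.05, lr_c=0.1, lr_a=0.05, seed=0):
    rng = np.random.default_rng(seed)
    wc, wa = 1.0, 0.0          # scalar critic / actor weights (toy linear parameterisation)
    x = 1.0                    # scalar state
    x_last = x                 # last state sample broadcast to the player
    events = 0
    for _ in range(steps):
        u = -wa * x            # actor's feedback policy
        x = 0.9 * x + 0.1 * u + 0.01 * rng.standard_normal()
        # critic: fast gradient-descent update toward the running cost x^2 + u^2
        target = x * x + u * u
        wc -= lr_c * (wc * x * x - target) * x * x
        # central event generator: trigger only on sufficient state deviation
        if abs(x - x_last) > threshold:
            events += 1
            x_last = x
            # actor: event-induced update, projected to keep the gain non-negative
            wa = max(0.0, wa + lr_a * wc * x * x)
    return events, steps

events, steps = run()
print(events, steps)
```

Under these assumptions the actor fires far less often than the critic updates, which is the communication saving the event generator is meant to capture.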
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_22 GBV_ILN_40 GBV_ILN_105 |
title_short |
Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer system |
url |
https://doi.org/10.1016/j.isatra.2022.02.007 |
remote_bool |
true |
author2 |
Mu, Chaoxu |
author2Str |
Mu, Chaoxu |
ppnlink |
ELV011067004 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth |
doi_str |
10.1016/j.isatra.2022.02.007 |
up_date |
2024-07-06T21:05:04.154Z |
_version_ |
1803865192299233280 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV059137274</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626052306.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">221103s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.isatra.2022.02.007</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001923.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV059137274</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0019-0578(22)00066-0</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">540</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">660</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">540</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">35.00</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Wang, Ke</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Asynchronous learning for actor–critic neural networks and synchronous triggering for multiplayer 
system</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">14</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">In this paper, based on actor–critic neural network structure and reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed, so as to solve Nash equilibrium of multiplayer nonzero-sum differential game in an adaptive fashion. From the point of optimal control view, each player or local controller wants to minimize the individual infinite-time cost function by finding an optimal policy. In this novel learning framework, each player consists of one critic and one actor, and implements distributed asynchronous policy iteration to optimize decision-making process. In addition, communication burden between the system and players is effectively reduced by setting up a central event generator. Critic network executes fast updates by gradient-descent adaption while actor network gives event-induced updates using the gradient projection. The closed-loop asymptotic stability is ensured along with uniform ultimate convergence. Then, the effectiveness of the proposed algorithm is substantiated on a four-player nonlinear system, revealing that it can significantly reduce sampling numbers without impairing learning accuracy. 
Finally, by leveraging nonzero-sum game idea, the proposed learning scheme is also applied to solve the lateral-directional stability of a linear aircraft system, and is further extended to a nonlinear vehicle system for achieving adaptive cruise control.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">In this paper, based on actor–critic neural network structure and reinforcement learning scheme, a novel asynchronous learning algorithm with event communication is developed, so as to solve Nash equilibrium of multiplayer nonzero-sum differential game in an adaptive fashion. From the point of optimal control view, each player or local controller wants to minimize the individual infinite-time cost function by finding an optimal policy. In this novel learning framework, each player consists of one critic and one actor, and implements distributed asynchronous policy iteration to optimize decision-making process. In addition, communication burden between the system and players is effectively reduced by setting up a central event generator. Critic network executes fast updates by gradient-descent adaption while actor network gives event-induced updates using the gradient projection. The closed-loop asymptotic stability is ensured along with uniform ultimate convergence. Then, the effectiveness of the proposed algorithm is substantiated on a four-player nonlinear system, revealing that it can significantly reduce sampling numbers without impairing learning accuracy. 
Finally, by leveraging nonzero-sum game idea, the proposed learning scheme is also applied to solve the lateral-directional stability of a linear aircraft system, and is further extended to a nonlinear vehicle system for achieving adaptive cruise control.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Event-triggered communication</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Actor–critic</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Nonzero-sum differential game</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Neural network</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Asynchronous learning</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Synchronous triggering</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Mu, Chaoxu</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier</subfield><subfield code="t">Selective extraction, structural characterisation and antifungal activity assessment of napins from an industrial rapeseed meal</subfield><subfield code="d">2012</subfield><subfield code="d">the science and engineering of measurement and automation</subfield><subfield code="g">Amsterdam [u.a.]</subfield><subfield code="w">(DE-627)ELV011067004</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:129</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:295-308</subfield><subfield code="g">extent:14</subfield></datafield><datafield tag="856" ind1="4" 
ind2="0"><subfield code="u">https://doi.org/10.1016/j.isatra.2022.02.007</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">35.00</subfield><subfield code="j">Chemie: Allgemeines</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">129</subfield><subfield code="j">2022</subfield><subfield code="h">295-308</subfield><subfield code="g">14</subfield></datafield></record></collection>
|