Compensated Motion and Position Estimation of a Cable-driven Parallel Robot Based on Deep Reinforcement Learning
Abstract Unlike conventional rigid-link parallel robots, cable-driven parallel robots (CDPRs) have distinct advantages, including lower inertia, higher payload-to-weight ratio, cost-efficiency, and larger workspaces. However, because of the complexity of the cable configuration and redundant actuation, model-based forward kinematics and motion control necessitate high effort and computation. This study overcomes these challenges by introducing deep reinforcement learning (DRL) into the cable robot and achieves compensated motion control by estimating the actual position of the end-effector. We used a random behavior strategy on a CDPR to explore the environment, collect data, and train neural networks. We then apply the trained network to the CDPR and verify its efficacy. We also addressed the problem of asynchronous state observation and action execution by delaying the action execution time in one cycle and adding this action to be executed to match the motion control command. Finally, we implemented the proposed control method to a high payload cable robot system and verified the feasibility through simulations and experiments. The results demonstrate that the end-effector position estimation accuracy can be improved compared with the numerical model-based forward kinematics solution and the position control error can be reduced compared with the conventional open-loop control and the open-loop control with tension distribution form.
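To make concrete why the abstract calls model-based forward kinematics computationally demanding: for a cable robot there is no closed-form map from cable lengths to end-effector pose, so the numerical FK baseline the paper compares against must solve the length equations iteratively. The sketch below is a toy planar point-mass stand-in, not the paper's method or geometry — the four anchor coordinates, the gradient-descent solver, and all parameters are illustrative assumptions chosen only to show the iterative character of numerical FK.

```python
import math

# Toy planar CDPR: four winch anchors at the corners of a 2 m x 2 m frame.
# Illustrative geometry only; the paper's robot and parameters are not given here.
ANCHORS = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]

def cable_lengths(p):
    """Inverse kinematics of a point-mass CDPR: one distance per anchor (trivial)."""
    return [math.dist(a, p) for a in ANCHORS]

def forward_kinematics(lengths, guess=(1.0, 1.0), iters=200, lr=0.1):
    """Numerical FK: minimize the squared cable-length residuals by gradient descent.

    Unlike the inverse map, this has no closed form and must iterate,
    which is the cost a learned estimator tries to avoid.
    """
    x, y = guess
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), l in zip(ANCHORS, lengths):
            d = math.hypot(x - ax, y - ay)   # current distance to this anchor
            r = d - l                        # residual vs. measured cable length
            gx += r * (x - ax) / d           # gradient of 0.5 * r**2 w.r.t. x
            gy += r * (y - ay) / d
        x -= lr * gx
        y -= lr * gy
    return x, y

# Round-trip check: recover a pose from its own cable lengths.
estimate = forward_kinematics(cable_lengths((0.7, 1.3)))
```

A learned estimator, as in the paper, would instead amortize this iteration into a single network forward pass trained on (cable lengths, measured position) pairs collected by the random exploration strategy the abstract describes.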
Detailed description

Author: Chen, Huaishu [author]
Format: E-article
Language: English
Published: 2023
Keywords: Cable-driven parallel robot; deep reinforcement learning; motion control
Note: © ICROS, KIEE and Springer 2023
Contained in: International Journal of Control, Automation and Systems - Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers, 2009, 21(2023), no. 11, Nov., pages 3507-3518
Citation fields: volume:21 ; year:2023 ; number:11 ; month:11 ; pages:3507-3518
DOI: 10.1007/s12555-023-0342-6
Catalog ID: SPR05363375X
---|
LEADER 01000naa a22002652 4500
001    SPR05363375X
003    DE-627
005    20231105064627.0
007    cr uuu---uuuuu
008    231105s2023 xx |||||o 00| ||eng c
024 7_ |a 10.1007/s12555-023-0342-6 |2 doi
035 __ |a (DE-627)SPR05363375X
035 __ |a (SPR)s12555-023-0342-6-e
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
100 1_ |a Chen, Huaishu |e verfasserin |4 aut
245 10 |a Compensated Motion and Position Estimation of a Cable-driven Parallel Robot Based on Deep Reinforcement Learning
264 _1 |c 2023
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
500 __ |a © ICROS, KIEE and Springer 2023
520 __ |a Abstract Unlike conventional rigid-link parallel robots, cable-driven parallel robots (CDPRs) have distinct advantages, including lower inertia, higher payload-to-weight ratio, cost-efficiency, and larger workspaces. However, because of the complexity of the cable configuration and redundant actuation, model-based forward kinematics and motion control necessitate high effort and computation. This study overcomes these challenges by introducing deep reinforcement learning (DRL) into the cable robot and achieves compensated motion control by estimating the actual position of the end-effector. We used a random behavior strategy on a CDPR to explore the environment, collect data, and train neural networks. We then apply the trained network to the CDPR and verify its efficacy. We also addressed the problem of asynchronous state observation and action execution by delaying the action execution time in one cycle and adding this action to be executed to match the motion control command. Finally, we implemented the proposed control method to a high payload cable robot system and verified the feasibility through simulations and experiments. The results demonstrate that the end-effector position estimation accuracy can be improved compared with the numerical model-based forward kinematics solution and the position control error can be reduced compared with the conventional open-loop control and the open-loop control with tension distribution form.
650 _4 |a Cable-driven parallel robot |7 (dpeaa)DE-He213
650 _4 |a deep reinforcement learning |7 (dpeaa)DE-He213
650 _4 |a motion control |7 (dpeaa)DE-He213
700 1_ |a Kim, Min-Cheol |4 aut
700 1_ |a Ko, Yeongoh |4 aut
700 1_ |a Kim, Chang-Sei |0 (orcid)0000-0003-4532-2006 |4 aut
773 08 |i Enthalten in |t International Journal of Control, Automation and Systems |d Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers, 2009 |g 21(2023), 11 vom: Nov., Seite 3507-3518 |w (DE-627)SPR026303256 |7 nnns
773 18 |g volume:21 |g year:2023 |g number:11 |g month:11 |g pages:3507-3518
856 40 |u https://dx.doi.org/10.1007/s12555-023-0342-6 |z lizenzpflichtig |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_SPRINGER
912 __ |a GBV_ILN_21
912 __ |a GBV_ILN_24
912 __ |a GBV_ILN_72
912 __ |a GBV_ILN_181
912 __ |a GBV_ILN_496
912 __ |a GBV_ILN_2002
912 __ |a GBV_ILN_2003
912 __ |a GBV_ILN_2007
912 __ |a GBV_ILN_2008
912 __ |a GBV_ILN_2009
912 __ |a GBV_ILN_2011
912 __ |a GBV_ILN_2060
912 __ |a GBV_ILN_2470
951 __ |a AR
952 __ |d 21 |j 2023 |e 11 |c 11 |h 3507-3518
ind2=" "><subfield code="a">© ICROS, KIEE and Springer 2023</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Unlike conventional rigid-link parallel robots, cable-driven parallel robots (CDPRs) have distinct advantages, including lower inertia, higher payload-to-weight ratio, cost-efficiency, and larger workspaces. However, because of the complexity of the cable configuration and redundant actuation, model-based forward kinematics and motion control necessitate high effort and computation. This study overcomes these challenges by introducing deep reinforcement learning (DRL) into the cable robot and achieves compensated motion control by estimating the actual position of the end-effector. We used a random behavior strategy on a CDPR to explore the environment, collect data, and train neural networks. We then apply the trained network to the CDPR and verify its efficacy. We also addressed the problem of asynchronous state observation and action execution by delaying the action execution time in one cycle and adding this action to be executed to match the motion control command. Finally, we implemented the proposed control method to a high payload cable robot system and verified the feasibility through simulations and experiments. 
The results demonstrate that the end-effector position estimation accuracy can be improved compared with the numerical model-based forward kinematics solution and the position control error can be reduced compared with the conventional open-loop control and the open-loop control with tension distribution form.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Cable-driven parallel robot</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep reinforcement learning</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">motion control</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kim, Min-Cheol</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Ko, Yeongoh</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kim, Chang-Sei</subfield><subfield code="0">(orcid)0000-0003-4532-2006</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">International Journal of Control, Automation and Systems</subfield><subfield code="d">Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers, 2009</subfield><subfield code="g">21(2023), 11 vom: Nov., Seite 3507-3518</subfield><subfield code="w">(DE-627)SPR026303256</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:21</subfield><subfield code="g">year:2023</subfield><subfield code="g">number:11</subfield><subfield code="g">month:11</subfield><subfield code="g">pages:3507-3518</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield 
code="u">https://dx.doi.org/10.1007/s12555-023-0342-6</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_21</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_72</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_181</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_496</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2002</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2060</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">21</subfield><subfield code="j">2023</subfield><subfield code="e">11</subfield><subfield code="c">11</subfield><subfield code="h">3507-3518</subfield></datafield></record></collection>
|