Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces
Detailed description

Author: Titu, Md Fahim Shahoriar [author]
Format: E-Article
Language: English
Published: 2023
Note: © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Contained in: International journal of intelligent robotics and applications - Springer Nature Singapore, 2017, 8(2023), 1, 25 Nov., pages 179-192
Issue details: volume:8 ; year:2023 ; number:1 ; day:25 ; month:11 ; pages:179-192
DOI: 10.1007/s41315-023-00305-y
Catalog ID: SPR05520256X
LEADER | 01000naa a22002652 4500 |
001 | SPR05520256X | ||
003 | DE-627 | ||
005 | 20240319064724.0 | ||
007 | cr uuu---uuuuu | ||
008 | 240319s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1007/s41315-023-00305-y |2 doi | |
035 | |a (DE-627)SPR05520256X | ||
035 | |a (SPR)s41315-023-00305-y-e | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
082 | 0 | 4 | |a 620 |a 670 |q VZ |
100 | 1 | |a Titu, Md Fahim Shahoriar |e verfasserin |4 aut | |
245 | 1 | 0 | |a Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces |
264 | 1 | |c 2023 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. | ||
520 | |a Abstract Automation and human-robot collaboration are increasing in modern workplaces such as industrial manufacturing. Nowadays, humans rely heavily on advanced robotic devices to perform tasks quickly and accurately. Modern robots equipped with computer vision and artificial intelligence are rapidly gaining attention and popularity. This paper demonstrates how a robot can automatically detect an object’s shape, color, and size using computer vision techniques and act on that feedback. In this work, a computational model has been developed that distinguishes an object’s shape, size, and color in real time with high accuracy; the model can then be integrated with a robotic arm to pick up a specific object. A dataset of 6558 images of monochromatic objects, covering three colors and five shapes against a white background, has been developed for the research. The designed detection system achieved 99.8% success in shape detection, and 100% success in color and size detection, using the OpenCV image processing framework. 
The prototype robotic system based on a Raspberry Pi 4B, on the other hand, achieved 80.7% accuracy for geometric shape detection, and 81.07% and 59.77% accuracy for color recognition and distance measurement, respectively. Moreover, the system guided a robotic arm to pick up an object based on its color and shape with a mean response time of 19 seconds. The idea is to simulate a workplace environment in which a worker asks the robotic system to perform a task on a specific object. The system can identify an object’s attributes with high accuracy (up to 100%) and perform the task reliably (81%); reliability could be improved further by equipping the robotic prototype with a more powerful computing system. The article’s contribution is to use a cutting-edge computer vision technique to detect and categorize objects with the help of a small private dataset, shortening the training duration and enabling the proposed system to adapt more quickly to components needed for a new industrial product. The source code and images of the collected dataset can be found at: https://github.com/TituShahoriar/cse499B_Hardware_Proposed_System. | ||
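The classification step described in the abstract (five shapes, three colors) can be sketched as follows. This is an illustrative sketch, not the authors' code: it assumes a contour whose polygon approximation yields a vertex count (as OpenCV's `approxPolyDP` would produce) and a dominant-channel color rule for the three-color setup; all function and label names are hypothetical, and OpenCV itself is deliberately avoided so the sketch stays dependency-free.

```python
def classify_shape(num_vertices: int) -> str:
    """Map a polygon approximation's vertex count to a shape label.

    Assumes the contour has already been approximated to a polygon;
    contours with many vertices are treated as circles.
    """
    shapes = {3: "triangle", 4: "rectangle", 5: "pentagon", 6: "hexagon"}
    return shapes.get(num_vertices, "circle")


def classify_color(r: int, g: int, b: int) -> str:
    """Pick the dominant RGB channel as the color class.

    Matches a three-color setup (red/green/blue objects on a white
    background); exact channel ties are not handled in this sketch.
    """
    return {r: "red", g: "green", b: "blue"}[max(r, g, b)]
```

For example, a contour approximated to 4 vertices with a strongly red mean pixel value would be reported as a red rectangle, which is the attribute pair the robotic arm would then act on.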
650 | 4 | |a Automation |7 (dpeaa)DE-He213 | |
650 | 4 | |a Bounding box regression |7 (dpeaa)DE-He213 | |
650 | 4 | |a Computer vision |7 (dpeaa)DE-He213 | |
650 | 4 | |a Detection |7 (dpeaa)DE-He213 | |
650 | 4 | |a Image processing |7 (dpeaa)DE-He213 | |
650 | 4 | |a Library |7 (dpeaa)DE-He213 | |
650 | 4 | |a OpenCV |7 (dpeaa)DE-He213 | |
650 | 4 | |a Robot |7 (dpeaa)DE-He213 | |
700 | 1 | |a Haque, S. M. Rezwanul |4 aut | |
700 | 1 | |a Islam, Rifad |4 aut | |
700 | 1 | |a Hossain, Akram |4 aut | |
700 | 1 | |a Qayum, Mohammad Abdul |4 aut | |
700 | 1 | |a Khan, Riasat |0 (orcid)0000-0002-5429-2235 |4 aut | |
773 | 0 | 8 | |i Enthalten in |t International journal of intelligent robotics and applications |d Springer Nature Singapore, 2017 |g 8(2023), 1 vom: 25. Nov., Seite 179-192 |w (DE-627)876318030 |w (DE-600)2879694-9 |x 2366-598X |7 nnns |
773 | 1 | 8 | |g volume:8 |g year:2023 |g number:1 |g day:25 |g month:11 |g pages:179-192 |
856 | 4 | 0 | |u https://dx.doi.org/10.1007/s41315-023-00305-y |z lizenzpflichtig |3 Volltext |
912 | |a SYSFLAG_0 | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_266 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_636 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2031 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2037 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2039 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2057 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2093 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2107 | ||
912 | |a GBV_ILN_2108 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2144 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2188 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2446 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2472 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_2548 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4246 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4328 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4336 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 8 |j 2023 |e 1 |b 25 |c 11 |h 179-192 |
language |
English |
source |
Enthalten in International journal of intelligent robotics and applications 8(2023), 1 vom: 25. Nov., Seite 179-192 volume:8 year:2023 number:1 day:25 month:11 pages:179-192 |
sourceStr |
Enthalten in International journal of intelligent robotics and applications 8(2023), 1 vom: 25. Nov., Seite 179-192 volume:8 year:2023 number:1 day:25 month:11 pages:179-192 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Automation Bounding box regression Computer vision Detection Image processing Library OpenCV Robot |
dewey-raw |
620 |
isfreeaccess_bool |
false |
container_title |
International journal of intelligent robotics and applications |
authorswithroles_txt_mv |
Titu, Md Fahim Shahoriar @@aut@@ Haque, S. M. Rezwanul @@aut@@ Islam, Rifad @@aut@@ Hossain, Akram @@aut@@ Qayum, Mohammad Abdul @@aut@@ Khan, Riasat @@aut@@ |
publishDateDaySort_date |
2023-11-25T00:00:00Z |
hierarchy_top_id |
876318030 |
dewey-sort |
3620 |
id |
SPR05520256X |
language_de |
englisch |
|
author |
Titu, Md Fahim Shahoriar |
spellingShingle |
Titu, Md Fahim Shahoriar ddc 620 misc Automation misc Bounding box regression misc Computer vision misc Detection misc Image processing misc Library misc OpenCV misc Robot Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces |
authorStr |
Titu, Md Fahim Shahoriar |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)876318030 |
format |
electronic Article |
dewey-ones |
620 - Engineering & allied operations 670 - Manufacturing |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
2366-598X |
topic_title |
620 670 VZ Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces Automation (dpeaa)DE-He213 Bounding box regression (dpeaa)DE-He213 Computer vision (dpeaa)DE-He213 Detection (dpeaa)DE-He213 Image processing (dpeaa)DE-He213 Library (dpeaa)DE-He213 OpenCV (dpeaa)DE-He213 Robot (dpeaa)DE-He213 |
topic |
ddc 620 misc Automation misc Bounding box regression misc Computer vision misc Detection misc Image processing misc Library misc OpenCV misc Robot |
topic_unstemmed |
ddc 620 misc Automation misc Bounding box regression misc Computer vision misc Detection misc Image processing misc Library misc OpenCV misc Robot |
topic_browse |
ddc 620 misc Automation misc Bounding box regression misc Computer vision misc Detection misc Image processing misc Library misc OpenCV misc Robot |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
International journal of intelligent robotics and applications |
hierarchy_parent_id |
876318030 |
dewey-tens |
620 - Engineering 670 - Manufacturing |
hierarchy_top_title |
International journal of intelligent robotics and applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)876318030 (DE-600)2879694-9 |
title |
Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces |
ctrlnum |
(DE-627)SPR05520256X (SPR)s41315-023-00305-y-e |
title_full |
Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces |
author_sort |
Titu, Md Fahim Shahoriar |
journal |
International journal of intelligent robotics and applications |
journalStr |
International journal of intelligent robotics and applications |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
txt |
container_start_page |
179 |
author_browse |
Titu, Md Fahim Shahoriar Haque, S. M. Rezwanul Islam, Rifad Hossain, Akram Qayum, Mohammad Abdul Khan, Riasat |
container_volume |
8 |
class |
620 670 VZ |
format_se |
Elektronische Aufsätze |
author-letter |
Titu, Md Fahim Shahoriar |
doi_str_mv |
10.1007/s41315-023-00305-y |
normlink |
(ORCID)0000-0002-5429-2235 |
normlink_prefix_str_mv |
(orcid)0000-0002-5429-2235 |
dewey-full |
620 670 |
title_sort |
experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces |
title_auth |
Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces |
abstract |
Abstract Automation and human-robot collaboration are increasing in modern workplaces such as industrial manufacturing. Nowadays, humans rely heavily on advanced robotic devices to perform tasks quickly and accurately, and modern robots with computer vision and artificial intelligence are rapidly gaining attention and popularity. This paper demonstrates how a robot can automatically detect an object’s shape, color, and size using computer vision techniques and act on that information feedback. In this work, a powerful computational model has been developed that distinguishes an object’s shape, size, and color in real time with high accuracy; it can then be integrated with a robotic arm to pick a specific object. A dataset of 6558 images of various monochromatic objects, covering three colors against a white background and five shapes, has been developed for the research. The designed detection system achieved 99.8% success in shape detection and 100% success in color and size detection with the OpenCV image processing framework. In contrast, the prototype robotic system based on the Raspberry Pi 4B achieved 80.7% accuracy for geometrical shape detection, and 81.07% and 59.77% accuracy for color recognition and distance measurement, respectively. Moreover, the system guided a robotic arm to pick up an object based on its color and shape with a mean response time of 19 seconds. The idea is to simulate a workplace environment in which a worker asks the robotic system to perform a task on a specific object. Our robotic system can accurately identify the object’s attributes (100%) and perform the task reliably (81%); the reliability of the robotic prototype can be further improved by using a more powerful computing system. 
The article’s contribution is to use a cutting-edge computer vision technique to detect and categorize objects with the help of a small private dataset to shorten the training duration and enable the suggested system to adapt to components that may be needed for creating a new industrial product in a shorter period. The source code and images of the collected dataset can be found at: https://github.com/TituShahoriar/cse499B_Hardware_Proposed_System. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
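The pipeline the abstract describes (contour-based shape recognition plus color matching) can be illustrated with a minimal, dependency-free Python sketch. Everything here is an assumption for illustration — the five shape names, the three reference colors, and the vertex-count heuristic are not taken from the article, whose actual implementation uses OpenCV and its own dataset.

```python
# Library-free sketch of the two classification steps the abstract describes:
# shape from the corner count of an approximated contour polygon, and color
# by nearest reference value. The paper itself uses OpenCV; the class names
# and reference colors below are illustrative assumptions, not the authors' code.

REFERENCE_COLORS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def classify_shape(num_vertices):
    """Map a polygon's vertex count to a shape class; many vertices ~ circle."""
    names = {3: "triangle", 4: "rectangle", 5: "pentagon", 6: "hexagon"}
    return names.get(num_vertices, "circle")

def classify_color(rgb):
    """Pick the reference color with the smallest squared RGB distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_COLORS, key=lambda name: sq_dist(rgb, REFERENCE_COLORS[name]))

print(classify_shape(4))              # rectangle
print(classify_color((200, 30, 40)))  # red
```

In an OpenCV implementation, `num_vertices` would typically come from `cv2.approxPolyDP` applied to a contour found with `cv2.findContours`, and the pixel color would be sampled from inside that contour.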
abstractGer |
Abstract Automation and human-robot collaboration are increasing in modern workplaces such as industrial manufacturing. Nowadays, humans rely heavily on advanced robotic devices to perform tasks quickly and accurately. Modern robots with computer vision and artificial intelligence are gaining attention and popularity rapidly. This paper demonstrates how a robot can automatically detect an object’s shape, color, and size using computer vision techniques and act based on information feedback. In this work, a powerful computational model for a robot has been developed that distinguishes an object’s shape, size, and color in real time with high accuracy. Then it can integrate a robotic arm to pick a specific object. A dataset of 6558 images of various monochromatic objects has been developed, containing three colors against a white background and five shapes for the research. The designed system for detection has achieved 99.8% success in an object’s shape detection. Also, the system demonstrated 100% success in the object’s color and size detection with the OpenCV image processing framework. On the other hand, the prototype robotic system based on Raspberry Pi-4B has achieved 80.7% accuracy for geometrical shape detection and 81.07%, and 59.77% accuracy for color recognition and distance measurement, respectively. Moreover, the system guided a robotic arm to pick up the object based on its color and shape with a mean response time of 19 seconds. The idea is to simulate a workplace environment where a worker will ask the robotic systems to perform a task on a specific object. Our robotic system can accurately identify the object’s attributes (e.g., 100%) and is able to perform the task reliably (81%). However, reliability can be improved by using a more powerful computing system, such as the robotic prototype. 
The article’s contribution is the use of a cutting-edge computer vision technique to detect and categorize objects with a small private dataset, which shortens the training duration and lets the proposed system adapt quickly to components needed for a new industrial product. The source code and images of the collected dataset can be found at: https://github.com/TituShahoriar/cse499B_Hardware_Proposed_System. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
abstract_unstemmed |
Abstract Automation and human-robot collaboration are increasing in modern workplaces such as industrial manufacturing. Nowadays, humans rely heavily on advanced robotic devices to perform tasks quickly and accurately, and modern robots equipped with computer vision and artificial intelligence are rapidly gaining attention and popularity. This paper demonstrates how a robot can automatically detect an object’s shape, color, and size using computer vision techniques and act on that information. In this work, a powerful computational model has been developed that distinguishes an object’s shape, size, and color in real time with high accuracy and can then direct a robotic arm to pick a specific object. A dataset of 6558 images of monochromatic objects in three colors and five shapes against a white background has been created for the research. The designed detection system achieved 99.8% success in shape detection and 100% success in color and size detection with the OpenCV image processing framework. The prototype robotic system based on the Raspberry Pi 4B, on the other hand, achieved 80.7% accuracy for geometric shape detection, and 81.07% and 59.77% accuracy for color recognition and distance measurement, respectively. Moreover, the system guided a robotic arm to pick up an object based on its color and shape with a mean response time of 19 seconds. The idea is to simulate a workplace environment in which a worker asks the robotic system to perform a task on a specific object. The system can accurately identify an object’s attributes (100%) and perform the task reliably (81%); reliability can be improved further by using a computing system more powerful than the Raspberry Pi-based prototype.
The article’s contribution is the use of a cutting-edge computer vision technique to detect and categorize objects with a small private dataset, which shortens the training duration and lets the proposed system adapt quickly to components needed for a new industrial product. The source code and images of the collected dataset can be found at: https://github.com/TituShahoriar/cse499B_Hardware_Proposed_System. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
collection_details |
SYSFLAG_0 GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_266 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
container_issue |
1 |
title_short |
Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces |
url |
https://dx.doi.org/10.1007/s41315-023-00305-y |
remote_bool |
true |
author2 |
Haque, S. M. Rezwanul Islam, Rifad Hossain, Akram Qayum, Mohammad Abdul Khan, Riasat |
author2Str |
Haque, S. M. Rezwanul Islam, Rifad Hossain, Akram Qayum, Mohammad Abdul Khan, Riasat |
ppnlink |
876318030 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s41315-023-00305-y |
up_date |
2024-07-03T14:02:05.039Z |
_version_ |
1803566789489065984 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">SPR05520256X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240319064724.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">240319s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s41315-023-00305-y</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR05520256X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s41315-023-00305-y-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">620</subfield><subfield code="a">670</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">620</subfield><subfield code="a">670</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Titu, Md Fahim Shahoriar</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Experiments with cooperative robots that can detect object’s shape, color and size to perform tasks in industrial workplaces</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" 
"><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Automation and human-robot collaboration are increasing in modern workplaces such as industrial manufacturing. Nowadays, humans rely heavily on advanced robotic devices to perform tasks quickly and accurately. Modern robots with computer vision and artificial intelligence are gaining attention and popularity rapidly. This paper demonstrates how a robot can automatically detect an object’s shape, color, and size using computer vision techniques and act based on information feedback. In this work, a powerful computational model for a robot has been developed that distinguishes an object’s shape, size, and color in real time with high accuracy. Then it can integrate a robotic arm to pick a specific object. A dataset of 6558 images of various monochromatic objects has been developed, containing three colors against a white background and five shapes for the research. The designed system for detection has achieved 99.8% success in an object’s shape detection. Also, the system demonstrated 100% success in the object’s color and size detection with the OpenCV image processing framework. 
The prototype robotic system based on the Raspberry Pi 4B, on the other hand, achieved 80.7% accuracy for geometric shape detection, and 81.07% and 59.77% accuracy for color recognition and distance measurement, respectively. Moreover, the system guided a robotic arm to pick up an object based on its color and shape with a mean response time of 19 seconds. The idea is to simulate a workplace environment in which a worker asks the robotic system to perform a task on a specific object. The system can accurately identify an object’s attributes (100%) and perform the task reliably (81%); reliability can be improved further by using a computing system more powerful than the Raspberry Pi-based prototype. The article’s contribution is the use of a cutting-edge computer vision technique to detect and categorize objects with a small private dataset, which shortens the training duration and lets the proposed system adapt quickly to components needed for a new industrial product. 
The source code and images of the collected dataset can be found at: https://github.com/TituShahoriar/cse499B_Hardware_Proposed_System.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Automation</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Bounding box regression</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Computer vision</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Detection</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image processing</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Library</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">OpenCV</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Robot</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Haque, S. M. 
Rezwanul</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Islam, Rifad</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hossain, Akram</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Qayum, Mohammad Abdul</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Khan, Riasat</subfield><subfield code="0">(orcid)0000-0002-5429-2235</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">International journal of intelligent robotics and applications</subfield><subfield code="d">Springer Nature Singapore, 2017</subfield><subfield code="g">8(2023), 1 vom: 25. Nov., Seite 179-192</subfield><subfield code="w">(DE-627)876318030</subfield><subfield code="w">(DE-600)2879694-9</subfield><subfield code="x">2366-598X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:8</subfield><subfield code="g">year:2023</subfield><subfield code="g">number:1</subfield><subfield code="g">day:25</subfield><subfield code="g">month:11</subfield><subfield code="g">pages:179-192</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s41315-023-00305-y</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_0</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_266</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2093</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2107</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2188</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2446</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2472</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2548</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4246</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4328</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">8</subfield><subfield code="j">2023</subfield><subfield code="e">1</subfield><subfield code="b">25</subfield><subfield code="c">11</subfield><subfield code="h">179-192</subfield></datafield></record></collection>
|