Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment
Abstract: In broad terms, existing video surveillance systems that combine automatic face recognition with manual face detection in unmanned aerial vehicles' (UAVs') video frames have not achieved recognition accuracy above 90%, largely because they use a limited number of eigenfaces for the principal component analysis (PCA) transform.
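The eigenface limitation the abstract refers to can be illustrated with a minimal, self-contained PCA sketch. This is not the authors' code: the gallery, the 4-pixel "images", and all names here are hypothetical toy data, and a hand-rolled power iteration stands in for a real eigensolver.

```python
# Illustrative sketch only (hypothetical toy data, not the paper's system):
# eigenface-style recognition = project images onto leading PCA axes,
# then match a probe to the nearest gallery projection.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def center(rows, mu):
    return [[x - m for x, m in zip(r, mu)] for r in rows]

def covariance(rows):
    # sample covariance of already-centered rows
    n, d = len(rows), len(rows[0])
    return [[sum(rows[k][i] * rows[k][j] for k in range(n)) / n
             for j in range(d)] for i in range(d)]

def power_iteration(mat, steps=200):
    # leading eigenvector of a symmetric matrix (the first "eigenface")
    v = [1.0] + [0.0] * (len(mat) - 1)
    for _ in range(steps):
        w = [sum(mat[i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(vec, mu, axis):
    # coordinate of one image along the eigenface axis
    return sum((x - m) * a for x, m, a in zip(vec, mu, axis))

# toy gallery: two labeled "faces", each a 4-pixel intensity vector
gallery = {"alice": [9.0, 8.0, 1.0, 1.0], "bob": [1.0, 1.0, 9.0, 8.0]}
rows = list(gallery.values())
mu = mean_vec(rows)
axis = power_iteration(covariance(center(rows, mu)))
coords = {name: project(v, mu, axis) for name, v in gallery.items()}

probe = [8.5, 8.2, 1.3, 0.9]                  # unseen image to identify
p = project(probe, mu, axis)
match = min(coords, key=lambda n: abs(coords[n] - p))
print(match)                                   # → alice
```

With only one retained eigenface, as here, all variation off that axis is discarded; keeping too few axes is exactly the accuracy ceiling the abstract attributes to earlier systems.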
Detailed description
Author: Ahamad, Rayees [author]
Format: E-article
Language: English
Published: 2023
Keywords: Cloud-IoT environment; Distributive environment; OpenCV software tools; Suspicious object detection; Video frames; Video surveillance; Wireless networks
Note: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Contained in: Cluster computing - Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998, 27(2023), 1, 17 Feb., pages 761-785
Contained in: volume:27 ; year:2023 ; number:1 ; day:17 ; month:02 ; pages:761-785
Links:
DOI / URN: 10.1007/s10586-023-03977-0
Catalog ID: SPR054884306
LEADER | 01000naa a22002652 4500 | ||
001 | SPR054884306 | ||
003 | DE-627 | ||
005 | 20240226100054.0 | ||
007 | cr uuu---uuuuu | ||
008 | 240226s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1007/s10586-023-03977-0 |2 doi | |
035 | |a (DE-627)SPR054884306 | ||
035 | |a (SPR)s10586-023-03977-0-e | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Ahamad, Rayees |e verfasserin |4 aut | |
245 | 1 | 0 | |a Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment |
264 | 1 | |c 2023 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. | ||
520 | |a Abstract: In broad terms, existing video surveillance systems that combine automatic face recognition with manual face detection in unmanned aerial vehicles' (UAVs') video frames have not achieved recognition accuracy above 90%, largely because they use a limited number of eigenfaces for the principal component analysis (PCA) transform. Detecting a face in a cloud-IoT video frame involves separating video/image windows into two basic classes: one containing faces (the foreground to be matched) and the other containing the surrounding background (used for training). The face detection problem is further complicated by inconsistent video/image quality, lighting conditions, and geometry, as well as the possibility of partial occlusion and disguise. An ideal face detection technique for video frames/images would therefore be intelligent enough to identify the presence of any face under any combination of foreground/background and lighting conditions. The authors further found that a fully automated iris-image-based face recognition and detection system can be useful for observation purposes such as ATM (automated teller machine) user security, whereas the proposed and implemented automated face recognition using UAV video frames in cloud-IoT-integrated distributive computing is ideal for mug-shot matching and surveillance of suspicious objects, because mug shots are gathered under controlled conditions. The proposed hybrid face-detection/object-detection-based video surveillance system was tested under very robust conditions in a cloud-IoT (Internet of Things) integrated distributive computing environment, and the experimental study indicates that the real-world performance of the proposed hybrid system will have far better accuracy than existing systems. 
During experiments, the authors observed that the proposed cloud-IoT-integrated hybrid approach for video surveillance of suspicious objects in a distributed real-time environment achieves high accuracy, a low overall error rate, and a very high average recall rate on self-generated real-time datasets and on the benchmark datasets KTH, COCO Detection, ImageNet, Kaggle, Vlomonaco GitHub, Pascal VOC, UCF-11, AVA, Collective Action, MSRA-B, SegTrack, ViSal, SegTrack V2, VOS, VOS-N, and DAVIS. The results of the proposed cloud-IoT-integrated video surveillance system confirm that the authors' design choices are reliable, efficient, and robust. These results are adequate for the surveillance of suspicious objects using UAV video clips/images in a cloud-IoT-integrated environment, although further improvements remain possible and desirable. | ||
650 | 4 | |a Cloud-IoT environment |7 (dpeaa)DE-He213 | |
650 | 4 | |a Distributive environment |7 (dpeaa)DE-He213 | |
650 | 4 | |a OpenCV software tools |7 (dpeaa)DE-He213 | |
650 | 4 | |a Suspicious object detection |7 (dpeaa)DE-He213 | |
650 | 4 | |a Video frames |7 (dpeaa)DE-He213 | |
650 | 4 | |a Video surveillance |7 (dpeaa)DE-He213 | |
650 | 4 | |a Wireless networks |7 (dpeaa)DE-He213 | |
700 | 1 | |a Mishra, Kamta Nath |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Cluster computing |d Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998 |g 27(2023), 1 vom: 17. Feb., Seite 761-785 |w (DE-627)320505332 |w (DE-600)2012757-1 |x 1573-7543 |7 nnns |
773 | 1 | 8 | |g volume:27 |g year:2023 |g number:1 |g day:17 |g month:02 |g pages:761-785 |
856 | 4 | 0 | |u https://dx.doi.org/10.1007/s10586-023-03977-0 |z lizenzpflichtig |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_636 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2031 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2037 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2039 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2057 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2093 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2107 | ||
912 | |a GBV_ILN_2108 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2144 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2188 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2446 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2472 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_2548 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4246 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4328 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4336 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 27 |j 2023 |e 1 |b 17 |c 02 |h 761-785 |
author_variant |
r a ra k n m kn knm |
---|---|
matchkey_str |
article:15737543:2023----::yrdprahossiiuojcsrelacuigiecisnuvmgsnlu |
hierarchy_sort_str |
2023 |
publishDate |
2023 |
allfields |
10.1007/s10586-023-03977-0 doi (DE-627)SPR054884306 (SPR)s10586-023-03977-0-e DE-627 ger DE-627 rakwb eng Ahamad, Rayees verfasserin aut Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment 2023 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. Abstract In broad terms, it can be said that the existing video surveillance systems with automatic face recognition and manual face detection in crewless aerial vehicles (UAVs’) video frames did not have recognition accuracy above 90%. It is because of using a limited amount of Eigenfaces for the principal component analysis (PCA) transform. Detecting a face in a cloud-IoT video frame involves extricating video/image windows into two basic classes: one class will contain faces (training the surroundings) and the other will contain matching (in the foreground). The problem of face detection is further convoluted by using inconsistent video/image qualities, lighting conditions, and geometries, and including the options of inequitable occlusion and disguise. Therefore, an ultimate face detection technique in video frames/images would be intelligent enough to identify the existence of any face in any set of foreground/background and lighting conditions. 
The authors further found that the implementation of a completely automated iris image-based face recognition and detection system can be useful for observation purposes like ATM (automated teller machine) user security whereas the proposed and implemented automated face recognition using UAVs video frames in cloud-IoT integrated distributive computing is ideal for mug-shot matching and surveillance of suspicious objects. It is because of the presence of controlled conditions while gathering mug shots. The proposed hybrid face detection/object detection-based video surveillance system was tested under very robust conditions in a cloud-IoT (Internet of Things) integrated distributive computing environment and it is envisaged in the experimental study that the real-world performance of the proposed hybrid system will have far better accuracy than the existing systems. The authors observed during experiments that the proposed cloud-IoT integrated hybrid approach for video surveillance of suspicious objects in a distributed real-time environment has high accuracy, low (overall error rate), and very high (average recall rate) for self-generated real-time datasets and benchmark datasets of KTH, COCO Detection, ImageNet, Kaggle, Vlomonaco Github, Pascal VOC, UCF-11, AVA, Collective Action, MSRA-B, SegTrack, ViSal, SegTrack V2, VOS, VOS-N, and DAVIS. The results of the proposed cloud-IoT integrated video surveillance system re-insure that the choices selected by the authors are reliable, efficient, and robust. Further, these results are adequate for the surveillance of suspicious objects using UAV video clips/images in a cloud-IoT integrated environment but further improvements in results are always possible and enviable. 
Cloud-IoT environment (dpeaa)DE-He213 Distributive environment (dpeaa)DE-He213 OpenCV software tools (dpeaa)DE-He213 Suspicious object detection (dpeaa)DE-He213 Video frames (dpeaa)DE-He213 Video surveillance (dpeaa)DE-He213 Wireless networks (dpeaa)DE-He213 Mishra, Kamta Nath aut Enthalten in Cluster computing Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998 27(2023), 1 vom: 17. Feb., Seite 761-785 (DE-627)320505332 (DE-600)2012757-1 1573-7543 nnns volume:27 year:2023 number:1 day:17 month:02 pages:761-785 https://dx.doi.org/10.1007/s10586-023-03977-0 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 
GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 AR 27 2023 1 17 02 761-785 |
spelling |
10.1007/s10586-023-03977-0 doi (DE-627)SPR054884306 (SPR)s10586-023-03977-0-e DE-627 ger DE-627 rakwb eng Ahamad, Rayees verfasserin aut Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment 2023 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. Abstract In broad terms, it can be said that the existing video surveillance systems with automatic face recognition and manual face detection in crewless aerial vehicles (UAVs’) video frames did not have recognition accuracy above 90%. It is because of using a limited amount of Eigenfaces for the principal component analysis (PCA) transform. Detecting a face in a cloud-IoT video frame involves extricating video/image windows into two basic classes: one class will contain faces (training the surroundings) and the other will contain matching (in the foreground). The problem of face detection is further convoluted by using inconsistent video/image qualities, lighting conditions, and geometries, and including the options of inequitable occlusion and disguise. Therefore, an ultimate face detection technique in video frames/images would be intelligent enough to identify the existence of any face in any set of foreground/background and lighting conditions. 
The authors further found that the implementation of a completely automated iris image-based face recognition and detection system can be useful for observation purposes like ATM (automated teller machine) user security whereas the proposed and implemented automated face recognition using UAVs video frames in cloud-IoT integrated distributive computing is ideal for mug-shot matching and surveillance of suspicious objects. It is because of the presence of controlled conditions while gathering mug shots. The proposed hybrid face detection/object detection-based video surveillance system was tested under very robust conditions in a cloud-IoT (Internet of Things) integrated distributive computing environment and it is envisaged in the experimental study that the real-world performance of the proposed hybrid system will have far better accuracy than the existing systems. The authors observed during experiments that the proposed cloud-IoT integrated hybrid approach for video surveillance of suspicious objects in a distributed real-time environment has high accuracy, low (overall error rate), and very high (average recall rate) for self-generated real-time datasets and benchmark datasets of KTH, COCO Detection, ImageNet, Kaggle, Vlomonaco Github, Pascal VOC, UCF-11, AVA, Collective Action, MSRA-B, SegTrack, ViSal, SegTrack V2, VOS, VOS-N, and DAVIS. The results of the proposed cloud-IoT integrated video surveillance system re-insure that the choices selected by the authors are reliable, efficient, and robust. Further, these results are adequate for the surveillance of suspicious objects using UAV video clips/images in a cloud-IoT integrated environment but further improvements in results are always possible and enviable. 
Cloud-IoT environment (dpeaa)DE-He213 Distributive environment (dpeaa)DE-He213 OpenCV software tools (dpeaa)DE-He213 Suspicious object detection (dpeaa)DE-He213 Video frames (dpeaa)DE-He213 Video surveillance (dpeaa)DE-He213 Wireless networks (dpeaa)DE-He213 Mishra, Kamta Nath aut Enthalten in Cluster computing Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998 27(2023), 1 vom: 17. Feb., Seite 761-785 (DE-627)320505332 (DE-600)2012757-1 1573-7543 nnns volume:27 year:2023 number:1 day:17 month:02 pages:761-785 https://dx.doi.org/10.1007/s10586-023-03977-0 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 
GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 AR 27 2023 1 17 02 761-785 |
allfields_unstemmed |
10.1007/s10586-023-03977-0 doi (DE-627)SPR054884306 (SPR)s10586-023-03977-0-e DE-627 ger DE-627 rakwb eng Ahamad, Rayees verfasserin aut Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment 2023 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. Abstract In broad terms, it can be said that the existing video surveillance systems with automatic face recognition and manual face detection in crewless aerial vehicles (UAVs’) video frames did not have recognition accuracy above 90%. It is because of using a limited amount of Eigenfaces for the principal component analysis (PCA) transform. Detecting a face in a cloud-IoT video frame involves extricating video/image windows into two basic classes: one class will contain faces (training the surroundings) and the other will contain matching (in the foreground). The problem of face detection is further convoluted by using inconsistent video/image qualities, lighting conditions, and geometries, and including the options of inequitable occlusion and disguise. Therefore, an ultimate face detection technique in video frames/images would be intelligent enough to identify the existence of any face in any set of foreground/background and lighting conditions. 
The authors further found that the implementation of a completely automated iris image-based face recognition and detection system can be useful for observation purposes like ATM (automated teller machine) user security whereas the proposed and implemented automated face recognition using UAVs video frames in cloud-IoT integrated distributive computing is ideal for mug-shot matching and surveillance of suspicious objects. It is because of the presence of controlled conditions while gathering mug shots. The proposed hybrid face detection/object detection-based video surveillance system was tested under very robust conditions in a cloud-IoT (Internet of Things) integrated distributive computing environment and it is envisaged in the experimental study that the real-world performance of the proposed hybrid system will have far better accuracy than the existing systems. The authors observed during experiments that the proposed cloud-IoT integrated hybrid approach for video surveillance of suspicious objects in a distributed real-time environment has high accuracy, low (overall error rate), and very high (average recall rate) for self-generated real-time datasets and benchmark datasets of KTH, COCO Detection, ImageNet, Kaggle, Vlomonaco Github, Pascal VOC, UCF-11, AVA, Collective Action, MSRA-B, SegTrack, ViSal, SegTrack V2, VOS, VOS-N, and DAVIS. The results of the proposed cloud-IoT integrated video surveillance system re-insure that the choices selected by the authors are reliable, efficient, and robust. Further, these results are adequate for the surveillance of suspicious objects using UAV video clips/images in a cloud-IoT integrated environment but further improvements in results are always possible and enviable. 
Cloud-IoT environment (dpeaa)DE-He213 Distributive environment (dpeaa)DE-He213 OpenCV software tools (dpeaa)DE-He213 Suspicious object detection (dpeaa)DE-He213 Video frames (dpeaa)DE-He213 Video surveillance (dpeaa)DE-He213 Wireless networks (dpeaa)DE-He213 Mishra, Kamta Nath aut Enthalten in Cluster computing Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998 27(2023), 1 vom: 17. Feb., Seite 761-785 (DE-627)320505332 (DE-600)2012757-1 1573-7543 nnns volume:27 year:2023 number:1 day:17 month:02 pages:761-785 https://dx.doi.org/10.1007/s10586-023-03977-0 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 
GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 AR 27 2023 1 17 02 761-785 |
allfieldsGer |
10.1007/s10586-023-03977-0 doi (DE-627)SPR054884306 (SPR)s10586-023-03977-0-e DE-627 ger DE-627 rakwb eng Ahamad, Rayees verfasserin aut Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment 2023 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. Abstract In broad terms, it can be said that the existing video surveillance systems with automatic face recognition and manual face detection in crewless aerial vehicles (UAVs’) video frames did not have recognition accuracy above 90%. It is because of using a limited amount of Eigenfaces for the principal component analysis (PCA) transform. Detecting a face in a cloud-IoT video frame involves extricating video/image windows into two basic classes: one class will contain faces (training the surroundings) and the other will contain matching (in the foreground). The problem of face detection is further convoluted by using inconsistent video/image qualities, lighting conditions, and geometries, and including the options of inequitable occlusion and disguise. Therefore, an ultimate face detection technique in video frames/images would be intelligent enough to identify the existence of any face in any set of foreground/background and lighting conditions. 
The authors further found that the implementation of a completely automated iris image-based face recognition and detection system can be useful for observation purposes like ATM (automated teller machine) user security whereas the proposed and implemented automated face recognition using UAVs video frames in cloud-IoT integrated distributive computing is ideal for mug-shot matching and surveillance of suspicious objects. It is because of the presence of controlled conditions while gathering mug shots. The proposed hybrid face detection/object detection-based video surveillance system was tested under very robust conditions in a cloud-IoT (Internet of Things) integrated distributive computing environment and it is envisaged in the experimental study that the real-world performance of the proposed hybrid system will have far better accuracy than the existing systems. The authors observed during experiments that the proposed cloud-IoT integrated hybrid approach for video surveillance of suspicious objects in a distributed real-time environment has high accuracy, low (overall error rate), and very high (average recall rate) for self-generated real-time datasets and benchmark datasets of KTH, COCO Detection, ImageNet, Kaggle, Vlomonaco Github, Pascal VOC, UCF-11, AVA, Collective Action, MSRA-B, SegTrack, ViSal, SegTrack V2, VOS, VOS-N, and DAVIS. The results of the proposed cloud-IoT integrated video surveillance system re-insure that the choices selected by the authors are reliable, efficient, and robust. Further, these results are adequate for the surveillance of suspicious objects using UAV video clips/images in a cloud-IoT integrated environment but further improvements in results are always possible and enviable. 
Cloud-IoT environment (dpeaa)DE-He213 Distributive environment (dpeaa)DE-He213 OpenCV software tools (dpeaa)DE-He213 Suspicious object detection (dpeaa)DE-He213 Video frames (dpeaa)DE-He213 Video surveillance (dpeaa)DE-He213 Wireless networks (dpeaa)DE-He213 Mishra, Kamta Nath aut Enthalten in Cluster computing Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998 27(2023), 1 vom: 17. Feb., Seite 761-785 (DE-627)320505332 (DE-600)2012757-1 1573-7543 nnns volume:27 year:2023 number:1 day:17 month:02 pages:761-785 https://dx.doi.org/10.1007/s10586-023-03977-0 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 
GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 AR 27 2023 1 17 02 761-785 |
language |
English |
source |
Enthalten in Cluster computing 27(2023), 1 vom: 17. Feb., Seite 761-785 volume:27 year:2023 number:1 day:17 month:02 pages:761-785 |
sourceStr |
Enthalten in Cluster computing 27(2023), 1 vom: 17. Feb., Seite 761-785 volume:27 year:2023 number:1 day:17 month:02 pages:761-785 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Cloud-IoT environment Distributive environment OpenCV software tools Suspicious object detection Video frames Video surveillance Wireless networks |
isfreeaccess_bool |
false |
container_title |
Cluster computing |
authorswithroles_txt_mv |
Ahamad, Rayees @@aut@@ Mishra, Kamta Nath @@aut@@ |
publishDateDaySort_date |
2023-02-17T00:00:00Z |
hierarchy_top_id |
320505332 |
id |
SPR054884306 |
language_de |
englisch |
author |
Ahamad, Rayees |
spellingShingle |
Ahamad, Rayees misc Cloud-IoT environment misc Distributive environment misc OpenCV software tools misc Suspicious object detection misc Video frames misc Video surveillance misc Wireless networks Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment |
authorStr |
Ahamad, Rayees |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)320505332 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1573-7543 |
topic_title |
Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment Cloud-IoT environment (dpeaa)DE-He213 Distributive environment (dpeaa)DE-He213 OpenCV software tools (dpeaa)DE-He213 Suspicious object detection (dpeaa)DE-He213 Video frames (dpeaa)DE-He213 Video surveillance (dpeaa)DE-He213 Wireless networks (dpeaa)DE-He213 |
topic |
misc Cloud-IoT environment misc Distributive environment misc OpenCV software tools misc Suspicious object detection misc Video frames misc Video surveillance misc Wireless networks |
topic_unstemmed |
misc Cloud-IoT environment misc Distributive environment misc OpenCV software tools misc Suspicious object detection misc Video frames misc Video surveillance misc Wireless networks |
topic_browse |
misc Cloud-IoT environment misc Distributive environment misc OpenCV software tools misc Suspicious object detection misc Video frames misc Video surveillance misc Wireless networks |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Cluster computing |
hierarchy_parent_id |
320505332 |
hierarchy_top_title |
Cluster computing |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)320505332 (DE-600)2012757-1 |
title |
Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment |
ctrlnum |
(DE-627)SPR054884306 (SPR)s10586-023-03977-0-e |
title_full |
Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment |
author_sort |
Ahamad, Rayees |
journal |
Cluster computing |
journalStr |
Cluster computing |
lang_code |
eng |
isOA_bool |
false |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
txt |
container_start_page |
761 |
author_browse |
Ahamad, Rayees Mishra, Kamta Nath |
container_volume |
27 |
format_se |
Elektronische Aufsätze |
author-letter |
Ahamad, Rayees |
doi_str_mv |
10.1007/s10586-023-03977-0 |
title_sort |
hybrid approach for suspicious object surveillance using video clips and uav images in cloud-iot-based computing environment |
title_auth |
Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment |
abstract |
Abstract In broad terms, it can be said that existing video surveillance systems with automatic face recognition and manual face detection in crewless aerial vehicles' (UAVs') video frames did not achieve recognition accuracy above 90%, because they used a limited number of Eigenfaces for the principal component analysis (PCA) transform. Detecting a face in a cloud-IoT video frame involves separating video/image windows into two basic classes: one containing faces (training on the surroundings) and the other containing matches (in the foreground). The problem of face detection is further complicated by inconsistent video/image qualities, lighting conditions, and geometries, including the possibility of partial occlusion and disguise. Therefore, an ideal face detection technique for video frames/images would be intelligent enough to identify the existence of any face under any set of foreground/background and lighting conditions. The authors further found that a completely automated iris-image-based face recognition and detection system can be useful for observation purposes such as ATM (automated teller machine) user security, whereas the proposed and implemented automated face recognition using UAV video frames in cloud-IoT integrated distributive computing is ideal for mug-shot matching and surveillance of suspicious objects, because mug shots are gathered under controlled conditions. The proposed hybrid face-detection/object-detection-based video surveillance system was tested under very robust conditions in a cloud-IoT (Internet of Things) integrated distributive computing environment, and the experimental study suggests that the real-world performance of the proposed hybrid system will have far better accuracy than existing systems. 
The authors observed during experiments that the proposed cloud-IoT integrated hybrid approach for video surveillance of suspicious objects in a distributed real-time environment achieves high accuracy, a low overall error rate, and a very high average recall rate on self-generated real-time datasets and on the benchmark datasets KTH, COCO Detection, ImageNet, Kaggle, Vlomonaco GitHub, Pascal VOC, UCF-11, AVA, Collective Action, MSRA-B, SegTrack, ViSal, SegTrack V2, VOS, VOS-N, and DAVIS. The results of the proposed cloud-IoT integrated video surveillance system reassure that the choices made by the authors are reliable, efficient, and robust. These results are adequate for the surveillance of suspicious objects using UAV video clips/images in a cloud-IoT integrated environment, but further improvements are always possible and desirable. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. 
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
container_issue |
1 |
title_short |
Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment |
url |
https://dx.doi.org/10.1007/s10586-023-03977-0 |
remote_bool |
true |
author2 |
Mishra, Kamta Nath |
author2Str |
Mishra, Kamta Nath |
ppnlink |
320505332 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s10586-023-03977-0 |
up_date |
2024-07-04T03:22:48.507Z |
_version_ |
1803617166707130368 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">SPR054884306</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240226100054.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">240226s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s10586-023-03977-0</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR054884306</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s10586-023-03977-0-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Ahamad, Rayees</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Hybrid approach for suspicious object surveillance using video clips and UAV images in cloud-IoT-based computing environment</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield 
tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract In broad terms, it can be said that the existing video surveillance systems with automatic face recognition and manual face detection in crewless aerial vehicles (UAVs’) video frames did not have recognition accuracy above 90%. It is because of using a limited amount of Eigenfaces for the principal component analysis (PCA) transform. Detecting a face in a cloud-IoT video frame involves extricating video/image windows into two basic classes: one class will contain faces (training the surroundings) and the other will contain matching (in the foreground). The problem of face detection is further convoluted by using inconsistent video/image qualities, lighting conditions, and geometries, and including the options of inequitable occlusion and disguise. Therefore, an ultimate face detection technique in video frames/images would be intelligent enough to identify the existence of any face in any set of foreground/background and lighting conditions. The authors further found that the implementation of a completely automated iris image-based face recognition and detection system can be useful for observation purposes like ATM (automated teller machine) user security whereas the proposed and implemented automated face recognition using UAVs video frames in cloud-IoT integrated distributive computing is ideal for mug-shot matching and surveillance of suspicious objects. 
It is because of the presence of controlled conditions while gathering mug shots. The proposed hybrid face detection/object detection-based video surveillance system was tested under very robust conditions in a cloud-IoT (Internet of Things) integrated distributive computing environment and it is envisaged in the experimental study that the real-world performance of the proposed hybrid system will have far better accuracy than the existing systems. The authors observed during experiments that the proposed cloud-IoT integrated hybrid approach for video surveillance of suspicious objects in a distributed real-time environment has high accuracy, low (overall error rate), and very high (average recall rate) for self-generated real-time datasets and benchmark datasets of KTH, COCO Detection, ImageNet, Kaggle, Vlomonaco Github, Pascal VOC, UCF-11, AVA, Collective Action, MSRA-B, SegTrack, ViSal, SegTrack V2, VOS, VOS-N, and DAVIS. The results of the proposed cloud-IoT integrated video surveillance system re-insure that the choices selected by the authors are reliable, efficient, and robust. 
Further, these results are adequate for the surveillance of suspicious objects using UAV video clips/images in a cloud-IoT integrated environment but further improvements in results are always possible and enviable.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Cloud-IoT environment</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Distributive environment</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">OpenCV software tools</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Suspicious object detection</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Video frames</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Video surveillance</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Wireless networks</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Mishra, Kamta Nath</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Cluster computing</subfield><subfield code="d">Dordrecht [u.a.] : Springer Science + Business Media B.V, 1998</subfield><subfield code="g">27(2023), 1 vom: 17. 
Feb., Seite 761-785</subfield><subfield code="w">(DE-627)320505332</subfield><subfield code="w">(DE-600)2012757-1</subfield><subfield code="x">1573-7543</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:27</subfield><subfield code="g">year:2023</subfield><subfield code="g">number:1</subfield><subfield code="g">day:17</subfield><subfield code="g">month:02</subfield><subfield code="g">pages:761-785</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s10586-023-03977-0</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2093</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2107</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2188</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2446</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2472</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2548</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4246</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4328</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">27</subfield><subfield code="j">2023</subfield><subfield code="e">1</subfield><subfield code="b">17</subfield><subfield code="c">02</subfield><subfield code="h">761-785</subfield></datafield></record></collection>
|
score |
7.399349 |