Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System
Abstract: Crowd tracking and the analysis of crowd behavior are challenging research areas in computer vision. In today’s crowded environments, manual surveillance systems are inefficient, labor-intensive, and unwieldy. Automated video surveillance systems offer promising solutions to these problems and have therefore become a necessity …
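The abstract describes tracking verified pedestrians by associating detections with existing tracks through the Hungarian algorithm, and it reports pedestrian-counting error as mean absolute error (MAE) and mean squared error (MSE). The sketch below is not the authors' implementation; it only illustrates those two steps under stated assumptions (Euclidean centroid distance as the assignment cost, an illustrative gating threshold `max_dist`, and per-frame ground-truth/predicted counts for the error metrics).

```python
# Minimal sketch, not the paper's code: Hungarian-algorithm association of
# per-frame pedestrian detections with existing tracks, plus the MAE/MSE
# counting metrics quoted in the abstract. The cost choice and the gating
# threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centroids, det_centroids, max_dist=30.0):
    """Match tracks to detections; return matches and the unmatched indices."""
    tracks = np.atleast_2d(np.asarray(track_centroids, dtype=float))
    dets = np.atleast_2d(np.asarray(det_centroids, dtype=float))
    if tracks.size == 0 or dets.size == 0:
        return [], list(range(len(track_centroids))), list(range(len(det_centroids)))
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(tracks[:, None, :] - dets[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # optimal one-to-one assignment
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(dets)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets

def counting_errors(true_counts, pred_counts):
    """MAE and MSE over per-frame pedestrian counts."""
    t = np.asarray(true_counts, dtype=float)
    p = np.asarray(pred_counts, dtype=float)
    return float(np.mean(np.abs(t - p))), float(np.mean((t - p) ** 2))
```

For example, `associate([(10, 12)], [(11, 13), (80, 5)])` matches the single track to the nearer detection and reports the second detection as unmatched.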
Detailed description
Author: Abdullah, Faisal [author]
Format: E-article
Language: English
Published: 2022
Keywords: Attraction force model; Crowd shape deformation; Multilayer neuro-fuzzy classifier; Semantic segmentation; Time-domain descriptors; Tracking and anomaly detection
Note: © King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Contained in: The Arabian journal for science and engineering - Berlin : Springer, 2011, 48(2022), no. 2, 25 Aug., pages 2173-2190
Contained in: volume:48 ; year:2022 ; number:2 ; day:25 ; month:08 ; pages:2173-2190
Links:
DOI / URN: 10.1007/s13369-022-07092-x
Catalog ID: SPR049282018
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | SPR049282018 | ||
003 | DE-627 | ||
005 | 20230510062450.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230209s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1007/s13369-022-07092-x |2 doi | |
035 | |a (DE-627)SPR049282018 | ||
035 | |a (SPR)s13369-022-07092-x-e | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Abdullah, Faisal |e verfasserin |4 aut | |
245 | 1 | 0 | |a Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System |
264 | 1 | |c 2022 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a © King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. | ||
520 | |a Abstract: Crowd tracking and the analysis of crowd behavior are challenging research areas in computer vision. In today’s crowded environments, manual surveillance systems are inefficient, labor-intensive, and unwieldy. Automated video surveillance systems offer promising solutions to these problems and have therefore become a necessity. However, challenges remain. The most important is extracting a foreground that represents human pixels only; extracting robust spatial and temporal descriptors, together with a potent classifier, is equally essential for accurate behavior detection. In this paper, we address these challenges by introducing semantic segmentation for foreground extraction. For pedestrian counting and tracking, we introduce a fusion of human motion analysis and an attraction force model via a weighted averaging method that removes non-humans and non-pedestrians from the scene. The verified pedestrians are counted using a fuzzy c-means algorithm and tracked via Hungarian-algorithm association combined with a dynamic template matching technique. For anomaly detection, after silhouette extraction we introduce robust spatio-temporal descriptors, including crowd shape deformation, silhouette slicing, particle convection, dominant motion, and energy descriptors. These descriptors are optimized using an adaptive genetic algorithm, and the multi-fused optimal features are finally fed to a multilayer neuro-fuzzy classifier for decision making. The proposed system is validated through extensive experiments and achieves accuracies of 91.8% and 89.16% on the UCSD and Mall datasets for crowd tracking. The mean absolute error and mean squared error for pedestrian counting are 1.69 and 2.09 on the UCSD dataset and 2.57 and 4.34 on the Mall dataset, respectively. Accuracies of 96.5% and 94% are achieved on the UMN and MED datasets for anomaly detection. | ||
650 | 4 | |a Attraction force model |7 (dpeaa)DE-He213 | |
650 | 4 | |a Crowd shape deformation |7 (dpeaa)DE-He213 | |
650 | 4 | |a Multilayer neuro-fuzzy classifier |7 (dpeaa)DE-He213 | |
650 | 4 | |a Semantic segmentation |7 (dpeaa)DE-He213 | |
650 | 4 | |a Time-domain descriptors |7 (dpeaa)DE-He213 | |
650 | 4 | |a Tracking and anomaly detection |7 (dpeaa)DE-He213 | |
700 | 1 | |a Jalal, Ahmad |0 (orcid)0000-0002-6998-3784 |4 aut | |
773 | 0 | 8 | |i Enthalten in |t The Arabian journal for science and engineering |d Berlin : Springer, 2011 |g 48(2022), 2 vom: 25. Aug., Seite 2173-2190 |w (DE-627)588780731 |w (DE-600)2471504-9 |x 2191-4281 |7 nnns |
773 | 1 | 8 | |g volume:48 |g year:2022 |g number:2 |g day:25 |g month:08 |g pages:2173-2190 |
856 | 4 | 0 | |u https://dx.doi.org/10.1007/s13369-022-07092-x |z lizenzpflichtig |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_636 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2031 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2037 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2039 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2057 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2093 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2107 | ||
912 | |a GBV_ILN_2108 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2144 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2188 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2446 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2472 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_2548 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4246 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4328 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4336 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 48 |j 2022 |e 2 |b 25 |c 08 |h 2173-2190 |
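The 520 abstract above states that verified pedestrians are counted with a fuzzy c-means clustering step. The following is a minimal, self-contained sketch of that standard algorithm written from the textbook update rules, not taken from the paper; the fuzzifier `m`, iteration cap, and tolerance are illustrative assumptions, and the input is assumed to be 2-D foreground point coordinates.

```python
# Minimal fuzzy c-means sketch (standard algorithm, not the authors' code).
# `points` is an (N, 2) array of foreground pixel/blob coordinates; the
# fuzzifier m, iteration cap, and tolerance below are illustrative defaults.
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Return (centers, memberships) for fuzzy c-means clustering."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    n = points.shape[0]
    u = rng.random((n, n_clusters))           # random initial memberships,
    u /= u.sum(axis=1, keepdims=True)         # each row sums to 1
    for _ in range(max_iter):
        um = u ** m
        # Weighted mean of the points gives each cluster center.
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        # Distance from every point to every center (avoid division by zero).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return centers, u
```

In a counting pipeline of this kind, each cluster center can stand in for one pedestrian candidate, so the retained cluster count serves as the per-frame pedestrian count; how the paper sets the number of clusters is not specified in this record.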
author_variant |
f a fa a j aj |
matchkey_str |
article:21914281:2022----::eatcemnainaecodrcignaoayeetovaerfzylsii |
hierarchy_sort_str |
2022 |
publishDate |
2022 |
language |
English |
source |
Enthalten in The Arabian journal for science and engineering 48(2022), 2 vom: 25. Aug., Seite 2173-2190 volume:48 year:2022 number:2 day:25 month:08 pages:2173-2190 |
sourceStr |
Enthalten in The Arabian journal for science and engineering 48(2022), 2 vom: 25. Aug., Seite 2173-2190 volume:48 year:2022 number:2 day:25 month:08 pages:2173-2190 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Attraction force model Crowd shape deformation Multilayer neuro-fuzzy classifier Semantic segmentation Time-domain descriptors Tracking and anomaly detection |
isfreeaccess_bool |
false |
container_title |
The Arabian journal for science and engineering |
authorswithroles_txt_mv |
Abdullah, Faisal @@aut@@ Jalal, Ahmad @@aut@@ |
publishDateDaySort_date |
2022-08-25T00:00:00Z |
hierarchy_top_id |
588780731 |
id |
SPR049282018 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">SPR049282018</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230510062450.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230209s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s13369-022-07092-x</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR049282018</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s13369-022-07092-x-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Abdullah, Faisal</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Crowd tracking and analysis of crowd behavior is a challenging research area in computer vision. In today’s crowded environment manual surveillance systems are inefficient, labor-intensive, and unwieldy. Automated video surveillance systems offer promising solutions to these problems and hence become a need for today’s environment. However, challenges remain. The most important challenge is the extraction of foreground representing human pixels only, also the extraction of robust spatial and temporal descriptors along with potent classifier is an essential part for accurate behavior detection. In this paper, we present our approach to these challenges by inventing semantic segmentation for foreground extraction. Furthermore, for pedestrians counting and tracking we introduced a fusion of human motion analysis and attraction force model by the weighted averaging method that removes non-humans and non-pedestrians from the scene. 
The verified pedestrians are counted using a fuzzy-c-mean algorithm and tracked via Hungarian algorithm association along with dynamic template matching technique. However, for anomaly detection after silhouettes extraction, we invent robust Spatio-temporal descriptors including crowd shape deformation, silhouette slicing, particles convection, dominant motion, and energy descriptors. That we optimized using an adaptive genetic algorithm and finally, multi-fused optimal features are fed to a multilayer neuro-fuzzy classifier for decision making. The proposed system is validated via extensive experimentations and achieved an accuracy of 91.8% and 89.16% over UCSD and Mall datasets for crowd tracking. However, the mean absolute error and mean square error for pedestrian counting are 1.69 and 2.09 over UCSD dataset and 2.57 and 4.34 for Mall dataset, respectively. An accuracy of 96.5% and 94% is achieved over UMN and MED datasets for anomaly detection.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Attraction force model</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Crowd shape deformation</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multilayer neuro-fuzzy classifier</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Semantic segmentation</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Time-domain descriptors</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Tracking and anomaly detection</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Jalal, Ahmad</subfield><subfield code="0">(orcid)0000-0002-6998-3784</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">The Arabian journal for science and engineering</subfield><subfield code="d">Berlin : Springer, 2011</subfield><subfield code="g">48(2022), 2 vom: 25. 
Aug., Seite 2173-2190</subfield><subfield code="w">(DE-627)588780731</subfield><subfield code="w">(DE-600)2471504-9</subfield><subfield code="x">2191-4281</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:48</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:2</subfield><subfield code="g">day:25</subfield><subfield code="g">month:08</subfield><subfield code="g">pages:2173-2190</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s13369-022-07092-x</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2093</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2107</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2188</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2446</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2472</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2548</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4246</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4328</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">48</subfield><subfield code="j">2022</subfield><subfield code="e">2</subfield><subfield code="b">25</subfield><subfield code="c">08</subfield><subfield code="h">2173-2190</subfield></datafield></record></collection>
|
author |
Abdullah, Faisal |
spellingShingle |
Abdullah, Faisal misc Attraction force model misc Crowd shape deformation misc Multilayer neuro-fuzzy classifier misc Semantic segmentation misc Time-domain descriptors misc Tracking and anomaly detection Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System |
authorStr |
Abdullah, Faisal |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)588780731 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
2191-4281 |
topic_title |
Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System Attraction force model (dpeaa)DE-He213 Crowd shape deformation (dpeaa)DE-He213 Multilayer neuro-fuzzy classifier (dpeaa)DE-He213 Semantic segmentation (dpeaa)DE-He213 Time-domain descriptors (dpeaa)DE-He213 Tracking and anomaly detection (dpeaa)DE-He213 |
topic |
misc Attraction force model misc Crowd shape deformation misc Multilayer neuro-fuzzy classifier misc Semantic segmentation misc Time-domain descriptors misc Tracking and anomaly detection |
topic_unstemmed |
misc Attraction force model misc Crowd shape deformation misc Multilayer neuro-fuzzy classifier misc Semantic segmentation misc Time-domain descriptors misc Tracking and anomaly detection |
topic_browse |
misc Attraction force model misc Crowd shape deformation misc Multilayer neuro-fuzzy classifier misc Semantic segmentation misc Time-domain descriptors misc Tracking and anomaly detection |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
The Arabian journal for science and engineering |
hierarchy_parent_id |
588780731 |
hierarchy_top_title |
The Arabian journal for science and engineering |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)588780731 (DE-600)2471504-9 |
title |
Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System |
ctrlnum |
(DE-627)SPR049282018 (SPR)s13369-022-07092-x-e |
title_full |
Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System |
author_sort |
Abdullah, Faisal |
journal |
The Arabian journal for science and engineering |
journalStr |
The Arabian journal for science and engineering |
lang_code |
eng |
isOA_bool |
false |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
2173 |
author_browse |
Abdullah, Faisal Jalal, Ahmad |
container_volume |
48 |
format_se |
Elektronische Aufsätze |
author-letter |
Abdullah, Faisal |
doi_str_mv |
10.1007/s13369-022-07092-x |
normlink |
(ORCID)0000-0002-6998-3784 |
normlink_prefix_str_mv |
(orcid)0000-0002-6998-3784 |
title_sort |
semantic segmentation based crowd tracking and anomaly detection via neuro-fuzzy classifier in smart surveillance system |
title_auth |
Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System |
abstract |
Abstract Crowd tracking and analysis of crowd behavior is a challenging research area in computer vision. In today’s crowded environment manual surveillance systems are inefficient, labor-intensive, and unwieldy. Automated video surveillance systems offer promising solutions to these problems and hence become a need for today’s environment. However, challenges remain. The most important challenge is the extraction of foreground representing human pixels only, also the extraction of robust spatial and temporal descriptors along with potent classifier is an essential part for accurate behavior detection. In this paper, we present our approach to these challenges by inventing semantic segmentation for foreground extraction. Furthermore, for pedestrians counting and tracking we introduced a fusion of human motion analysis and attraction force model by the weighted averaging method that removes non-humans and non-pedestrians from the scene. The verified pedestrians are counted using a fuzzy-c-mean algorithm and tracked via Hungarian algorithm association along with dynamic template matching technique. However, for anomaly detection after silhouettes extraction, we invent robust Spatio-temporal descriptors including crowd shape deformation, silhouette slicing, particles convection, dominant motion, and energy descriptors. That we optimized using an adaptive genetic algorithm and finally, multi-fused optimal features are fed to a multilayer neuro-fuzzy classifier for decision making. The proposed system is validated via extensive experimentations and achieved an accuracy of 91.8% and 89.16% over UCSD and Mall datasets for crowd tracking. However, the mean absolute error and mean square error for pedestrian counting are 1.69 and 2.09 over UCSD dataset and 2.57 and 4.34 for Mall dataset, respectively. An accuracy of 96.5% and 94% is achieved over UMN and MED datasets for anomaly detection. © King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
abstractGer |
Abstract Crowd tracking and analysis of crowd behavior is a challenging research area in computer vision. In today’s crowded environment manual surveillance systems are inefficient, labor-intensive, and unwieldy. Automated video surveillance systems offer promising solutions to these problems and hence become a need for today’s environment. However, challenges remain. The most important challenge is the extraction of foreground representing human pixels only, also the extraction of robust spatial and temporal descriptors along with potent classifier is an essential part for accurate behavior detection. In this paper, we present our approach to these challenges by inventing semantic segmentation for foreground extraction. Furthermore, for pedestrians counting and tracking we introduced a fusion of human motion analysis and attraction force model by the weighted averaging method that removes non-humans and non-pedestrians from the scene. The verified pedestrians are counted using a fuzzy-c-mean algorithm and tracked via Hungarian algorithm association along with dynamic template matching technique. However, for anomaly detection after silhouettes extraction, we invent robust Spatio-temporal descriptors including crowd shape deformation, silhouette slicing, particles convection, dominant motion, and energy descriptors. That we optimized using an adaptive genetic algorithm and finally, multi-fused optimal features are fed to a multilayer neuro-fuzzy classifier for decision making. The proposed system is validated via extensive experimentations and achieved an accuracy of 91.8% and 89.16% over UCSD and Mall datasets for crowd tracking. However, the mean absolute error and mean square error for pedestrian counting are 1.69 and 2.09 over UCSD dataset and 2.57 and 4.34 for Mall dataset, respectively. An accuracy of 96.5% and 94% is achieved over UMN and MED datasets for anomaly detection. © King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
abstract_unstemmed |
Abstract Crowd tracking and analysis of crowd behavior is a challenging research area in computer vision. In today’s crowded environment manual surveillance systems are inefficient, labor-intensive, and unwieldy. Automated video surveillance systems offer promising solutions to these problems and hence become a need for today’s environment. However, challenges remain. The most important challenge is the extraction of foreground representing human pixels only, also the extraction of robust spatial and temporal descriptors along with potent classifier is an essential part for accurate behavior detection. In this paper, we present our approach to these challenges by inventing semantic segmentation for foreground extraction. Furthermore, for pedestrians counting and tracking we introduced a fusion of human motion analysis and attraction force model by the weighted averaging method that removes non-humans and non-pedestrians from the scene. The verified pedestrians are counted using a fuzzy-c-mean algorithm and tracked via Hungarian algorithm association along with dynamic template matching technique. However, for anomaly detection after silhouettes extraction, we invent robust Spatio-temporal descriptors including crowd shape deformation, silhouette slicing, particles convection, dominant motion, and energy descriptors. That we optimized using an adaptive genetic algorithm and finally, multi-fused optimal features are fed to a multilayer neuro-fuzzy classifier for decision making. The proposed system is validated via extensive experimentations and achieved an accuracy of 91.8% and 89.16% over UCSD and Mall datasets for crowd tracking. However, the mean absolute error and mean square error for pedestrian counting are 1.69 and 2.09 over UCSD dataset and 2.57 and 4.34 for Mall dataset, respectively. An accuracy of 96.5% and 94% is achieved over UMN and MED datasets for anomaly detection. © King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
container_issue |
2 |
title_short |
Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System |
url |
https://dx.doi.org/10.1007/s13369-022-07092-x |
remote_bool |
true |
author2 |
Jalal, Ahmad |
author2Str |
Jalal, Ahmad |
ppnlink |
588780731 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s13369-022-07092-x |
up_date |
2024-07-04T00:10:00.879Z |
_version_ |
1803605037160595456 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">SPR049282018</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230510062450.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230209s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s13369-022-07092-x</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR049282018</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s13369-022-07092-x-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Abdullah, Faisal</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Semantic Segmentation Based Crowd Tracking and Anomaly Detection via Neuro-fuzzy Classifier in Smart Surveillance System</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Crowd tracking and analysis of crowd behavior is a challenging research area in computer vision. In today’s crowded environment manual surveillance systems are inefficient, labor-intensive, and unwieldy. Automated video surveillance systems offer promising solutions to these problems and hence become a need for today’s environment. However, challenges remain. The most important challenge is the extraction of foreground representing human pixels only, also the extraction of robust spatial and temporal descriptors along with potent classifier is an essential part for accurate behavior detection. In this paper, we present our approach to these challenges by inventing semantic segmentation for foreground extraction. Furthermore, for pedestrians counting and tracking we introduced a fusion of human motion analysis and attraction force model by the weighted averaging method that removes non-humans and non-pedestrians from the scene. 
The verified pedestrians are counted using a fuzzy-c-mean algorithm and tracked via Hungarian algorithm association along with dynamic template matching technique. However, for anomaly detection after silhouettes extraction, we invent robust Spatio-temporal descriptors including crowd shape deformation, silhouette slicing, particles convection, dominant motion, and energy descriptors. That we optimized using an adaptive genetic algorithm and finally, multi-fused optimal features are fed to a multilayer neuro-fuzzy classifier for decision making. The proposed system is validated via extensive experimentations and achieved an accuracy of 91.8% and 89.16% over UCSD and Mall datasets for crowd tracking. However, the mean absolute error and mean square error for pedestrian counting are 1.69 and 2.09 over UCSD dataset and 2.57 and 4.34 for Mall dataset, respectively. An accuracy of 96.5% and 94% is achieved over UMN and MED datasets for anomaly detection.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Attraction force model</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Crowd shape deformation</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multilayer neuro-fuzzy classifier</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Semantic segmentation</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Time-domain descriptors</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Tracking and anomaly detection</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Jalal, Ahmad</subfield><subfield code="0">(orcid)0000-0002-6998-3784</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">The Arabian journal for science and engineering</subfield><subfield code="d">Berlin : Springer, 2011</subfield><subfield code="g">48(2022), 2 vom: 25. 
Aug., Seite 2173-2190</subfield><subfield code="w">(DE-627)588780731</subfield><subfield code="w">(DE-600)2471504-9</subfield><subfield code="x">2191-4281</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:48</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:2</subfield><subfield code="g">day:25</subfield><subfield code="g">month:08</subfield><subfield code="g">pages:2173-2190</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s13369-022-07092-x</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2093</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2107</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2188</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2446</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2472</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2548</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4246</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4328</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">48</subfield><subfield code="j">2022</subfield><subfield code="e">2</subfield><subfield code="b">25</subfield><subfield code="c">08</subfield><subfield code="h">2173-2190</subfield></datafield></record></collection>
|