Multi-class object detection system using hybrid convolutional neural network architecture
Abstract Object detection in computer vision has been a significant research area for the past decade. Identifying objects of multiple classes in an image has attracted great attention because it enables both effective classification and detection. A multi-class object detection system from a video or...
Detailed description
Author: |
Borade, Jay Laxman [author] |
---|
Format: |
E-article |
---|---|
Language: |
English |
Published: |
2022 |
---|
Subject headings: |
---|
Note: |
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022 |
---|
Parent work: |
Contained in: Multimedia tools and applications - Dordrecht [u.a.] : Springer Science + Business Media B.V, 1995, 81(2022), 22 of 11 Apr., pages 31727-31751 |
---|---|
Parent work: |
volume:81 ; year:2022 ; number:22 ; day:11 ; month:04 ; pages:31727-31751 |
Links: |
---|
DOI / URN: |
10.1007/s11042-022-13007-7 |
---|
Catalog ID: |
SPR047908521 |
---|
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | SPR047908521 | ||
003 | DE-627 | ||
005 | 20230509110131.0 | ||
007 | cr uuu---uuuuu | ||
008 | 220823s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1007/s11042-022-13007-7 |2 doi | |
035 | |a (DE-627)SPR047908521 | ||
035 | |a (SPR)s11042-022-13007-7-e | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Borade, Jay Laxman |e verfasserin |4 aut | |
245 | 1 | 0 | |a Multi-class object detection system using hybrid convolutional neural network architecture |
264 | 1 | |c 2022 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022 | ||
520 | |a Abstract Object detection has been a significant research area in computer vision for the past decade. Identifying objects of multiple classes in an image has attracted great attention because it enables both effective classification and detection. Multi-class object detection from a video or image is challenging because of errors introduced by the localization and classification process. Our proposed system uses a generalized hybrid convolutional neural network (H-CNN) model to recognize objects in an image. The proposed work integrates pre-processing, object localization, feature extraction and classification. First, the input image is pre-processed with Gaussian filtering to remove noise and improve image quality. After pre-processing, the image is subjected to object localization, where the object is localized using Grid Guided Localization (GGL). In the feature extraction phase, the model is pre-trained with AlexNet, whose fully connected (FC) layers provide the features. Finally, the Softmax layer of the AlexNet architecture is replaced by Support Vector Regression (SVR), which acts as the classifier for identifying the object class. The classification loss is minimized using the Improved Grey Wolf (IGW) optimization algorithm. Thus, the H-CNN model can quickly classify and label objects in images, and it offers improved classification performance while keeping training time manageable. The proposed work is implemented in Python. The model is trained and evaluated on several datasets, namely MIT-67, PASCAL VOC2010, MS (Microsoft) COCO, and MSRC. The proposed H-CNN achieved improved accuracy on MIT-67 (96.02%), PASCAL VOC2010 (95.04%), MSRC (97.37%), and MS COCO (94.53%). 
The Mean Average Precision (mAP), Precision, Accuracy, Recall and F1-Score values obtained by H-CNN are better than those of recently developed architectures such as YOLO-fine, EfficientDet, YOLOv4, RetinaNet, GCNet and HRNet. | ||
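The abstract states that the classification loss is minimized with an Improved Grey Wolf (IGW) optimization algorithm but does not describe the improvement itself. As a minimal sketch of the underlying technique, the canonical Grey Wolf Optimizer below minimizes a generic objective function; the function name `grey_wolf_optimize` and all parameters are illustrative, not taken from the paper, and in the paper's setting the objective would be the H-CNN classification loss.

```python
import numpy as np

def grey_wolf_optimize(f, dim, bounds, n_wolves=20, n_iters=200, seed=0):
    """Minimize f over the box [lo, hi]^dim with the canonical Grey Wolf Optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initialize the pack at random positions inside the search box.
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        # Alpha, beta, delta: the three best wolves guide the rest of the pack.
        leaders = X[order[:3]].copy()
        a = 2.0 * (1.0 - t / n_iters)  # control coefficient, decays 2 -> 0
        contribs = []
        for leader in leaders:
            r1 = rng.random((n_wolves, dim))
            r2 = rng.random((n_wolves, dim))
            A = 2.0 * a * r1 - a       # |A| > 1 favors exploration, |A| < 1 exploitation
            C = 2.0 * r2
            D = np.abs(C * leader - X)  # distance of each wolf to this leader
            contribs.append(leader - A * D)
        # Each wolf moves to the mean of the three leader-guided candidate positions.
        X = np.clip(sum(contribs) / 3.0, lo, hi)
    fitness = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fitness)]
    return best, float(f(best))
```

Because the coefficient `a` decays linearly, early iterations scatter the pack widely around the three leaders and later iterations contract it onto them, which is the exploration-to-exploitation schedule that "improved" GWO variants typically modify.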
650 | 4 | |a Image processing |7 (dpeaa)DE-He213 | |
650 | 4 | |a Object localization |7 (dpeaa)DE-He213 | |
650 | 4 | |a Deep learning |7 (dpeaa)DE-He213 | |
650 | 4 | |a Object recognition |7 (dpeaa)DE-He213 | |
650 | 4 | |a Machine learning |7 (dpeaa)DE-He213 | |
700 | 1 | |a Lakshmi, Muddana A |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Multimedia tools and applications |d Dordrecht [u.a.] : Springer Science + Business Media B.V, 1995 |g 81(2022), 22 vom: 11. Apr., Seite 31727-31751 |w (DE-627)27135030X |w (DE-600)1479928-5 |x 1573-7721 |7 nnns |
773 | 1 | 8 | |g volume:81 |g year:2022 |g number:22 |g day:11 |g month:04 |g pages:31727-31751 |
856 | 4 | 0 | |u https://dx.doi.org/10.1007/s11042-022-13007-7 |z lizenzpflichtig |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_636 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2031 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2037 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2039 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2057 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2093 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2107 | ||
912 | |a GBV_ILN_2108 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2119 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2144 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2188 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2446 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2472 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_2548 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4246 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4328 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4336 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 81 |j 2022 |e 22 |b 11 |c 04 |h 31727-31751 |
author_variant |
j l b jl jlb m a l ma mal |
---|---|
matchkey_str |
article:15737721:2022----::utcasbeteetosseuigyrdovltoanua |
hierarchy_sort_str |
2022 |
publishDate |
2022 |
language |
English |
source |
Enthalten in Multimedia tools and applications 81(2022), 22 vom: 11. Apr., Seite 31727-31751 volume:81 year:2022 number:22 day:11 month:04 pages:31727-31751 |
sourceStr |
Enthalten in Multimedia tools and applications 81(2022), 22 vom: 11. Apr., Seite 31727-31751 volume:81 year:2022 number:22 day:11 month:04 pages:31727-31751 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Image processing Object localization Deep learning Object recognition Machine learning |
isfreeaccess_bool |
false |
container_title |
Multimedia tools and applications |
authorswithroles_txt_mv |
Borade, Jay Laxman @@aut@@ Lakshmi, Muddana A @@aut@@ |
publishDateDaySort_date |
2022-04-11T00:00:00Z |
hierarchy_top_id |
27135030X |
id |
SPR047908521 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">SPR047908521</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230509110131.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220823s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11042-022-13007-7</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR047908521</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s11042-022-13007-7-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Borade, Jay Laxman</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Multi-class object detection system using hybrid convolutional neural network architecture</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" 
"><subfield code="a">© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Object detection in computer vision has been a significant research area for the past decade. Identifying objects with multiple classes from an image has attracted great attention because it can effectively classify and detect the image. A multi-class object detection system from a video or image is quite challenging because of the errors obtained by the location classification process. Our proposed system generalized a hybrid convolutional neural network (H-CNN) model is used to realize the user object from an image. The proposed work integrates pre-processing, object localization, feature extraction and classification. First, the input image is pre-processed with Gaussian filtering to remove noise and improve the image quality. After completing the pre-processing procedure, it is subjected to object localization. Here the object in the image is localized using Grid Guided Localization (GGL). In the feature extraction phase, the model would be pre-trained with AlexNet. Here the AlexNet are generalized as fully connected (FC) layers. Finally, the Softmax layer in the AlexNet architecture is replaced by SVR (Support Vector Regression), which acts as a classifier for identifying the object class. The classification loss is minimized using the Improved Grey Wolf (IGW) optimization algorithm. Thus, the H-CNN model can quickly classify and label the objects from images. It also offers improved classification performance in managing effective training time. The proposed work will be implemented in PYTHON. Therefore, the model would be built using various datasets such as MIT-67, PASCAL VOC2010, MS (Microsoft)-COCO, and MSRC to effectively train and classify the object. 
The proposed H-CNN achieved improved results with MIT-67 (96.02%), PASCAL VOC2010 (95.04%), MSRC (97.37%), and MS COCO (94.53%). The results obtained by H-CNN proved that the excluded result of Mean Average Precision (mAP), Precision, Accuracy, Recall values and F1-Score achieved better results than with recently developed works such as YOLO-fine, EfficientDet, YOLOv4, RetinaNet, GCNet and HRNet architectures.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image processing</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Object localization</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Deep learning</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Object recognition</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Machine learning</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lakshmi, Muddana A</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Multimedia tools and applications</subfield><subfield code="d">Dordrecht [u.a.] : Springer Science + Business Media B.V, 1995</subfield><subfield code="g">81(2022), 22 vom: 11. 
Apr., Seite 31727-31751</subfield><subfield code="w">(DE-627)27135030X</subfield><subfield code="w">(DE-600)1479928-5</subfield><subfield code="x">1573-7721</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:81</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:22</subfield><subfield code="g">day:11</subfield><subfield code="g">month:04</subfield><subfield code="g">pages:31727-31751</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s11042-022-13007-7</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2093</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2107</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2119</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2188</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2446</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2472</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2548</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4246</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4328</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">81</subfield><subfield code="j">2022</subfield><subfield code="e">22</subfield><subfield code="b">11</subfield><subfield code="c">04</subfield><subfield code="h">31727-31751</subfield></datafield></record></collection>
|
author |
Borade, Jay Laxman |
spellingShingle |
Borade, Jay Laxman misc Image processing misc Object localization misc Deep learning misc Object recognition misc Machine learning Multi-class object detection system using hybrid convolutional neural network architecture |
authorStr |
Borade, Jay Laxman |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)27135030X |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1573-7721 |
topic_title |
Multi-class object detection system using hybrid convolutional neural network architecture Image processing (dpeaa)DE-He213 Object localization (dpeaa)DE-He213 Deep learning (dpeaa)DE-He213 Object recognition (dpeaa)DE-He213 Machine learning (dpeaa)DE-He213 |
topic |
misc Image processing misc Object localization misc Deep learning misc Object recognition misc Machine learning |
topic_unstemmed |
misc Image processing misc Object localization misc Deep learning misc Object recognition misc Machine learning |
topic_browse |
misc Image processing misc Object localization misc Deep learning misc Object recognition misc Machine learning |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Multimedia tools and applications |
hierarchy_parent_id |
27135030X |
hierarchy_top_title |
Multimedia tools and applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)27135030X (DE-600)1479928-5 |
title |
Multi-class object detection system using hybrid convolutional neural network architecture |
ctrlnum |
(DE-627)SPR047908521 (SPR)s11042-022-13007-7-e |
title_full |
Multi-class object detection system using hybrid convolutional neural network architecture |
author_sort |
Borade, Jay Laxman |
journal |
Multimedia tools and applications |
journalStr |
Multimedia tools and applications |
lang_code |
eng |
isOA_bool |
false |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
31727 |
author_browse |
Borade, Jay Laxman Lakshmi, Muddana A |
container_volume |
81 |
format_se |
Elektronische Aufsätze |
author-letter |
Borade, Jay Laxman |
doi_str_mv |
10.1007/s11042-022-13007-7 |
title_sort |
multi-class object detection system using hybrid convolutional neural network architecture |
title_auth |
Multi-class object detection system using hybrid convolutional neural network architecture |
abstract |
Abstract Object detection in computer vision has been a significant research area for the past decade. Identifying objects of multiple classes in an image has attracted great attention because such a system can effectively classify and detect image content. Multi-class object detection from a video or image is quite challenging because of the errors introduced by the localization and classification process. In our proposed system, a generalized hybrid convolutional neural network (H-CNN) model is used to recognize objects in an image. The proposed work integrates pre-processing, object localization, feature extraction, and classification. First, the input image is pre-processed with Gaussian filtering to remove noise and improve image quality. After pre-processing, the image is subjected to object localization, where the objects are localized using Grid Guided Localization (GGL). In the feature extraction phase, the model is pre-trained with AlexNet; here, the AlexNet layers are generalized as fully connected (FC) layers. Finally, the Softmax layer of the AlexNet architecture is replaced by Support Vector Regression (SVR), which acts as the classifier identifying the object class. The classification loss is minimized using the Improved Grey Wolf (IGW) optimization algorithm. Thus, the H-CNN model can quickly classify and label objects in images while keeping training time manageable. The proposed work is implemented in Python. The model is trained and evaluated on several datasets, namely MIT-67, PASCAL VOC2010, MS (Microsoft) COCO, and MSRC. The proposed H-CNN achieved improved results on MIT-67 (96.02%), PASCAL VOC2010 (95.04%), MSRC (97.37%), and MS COCO (94.53%).
The results show that the Mean Average Precision (mAP), Precision, Accuracy, Recall, and F1-Score obtained by H-CNN are better than those of recently developed architectures such as YOLO-fine, EfficientDet, YOLOv4, RetinaNet, GCNet, and HRNet. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
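The article itself is not accompanied by code. As an illustrative sketch of the Gaussian pre-filtering step the abstract describes, the following is a plain NumPy implementation assumed for this example (not the authors' code): a normalized Gaussian kernel is convolved with a grayscale image to suppress noise while preserving the image dimensions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (its entries sum to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(image, size=5, sigma=1.0):
    """Denoise a 2-D grayscale image by convolving with a Gaussian kernel.
    Edge padding is used, so the output has the same shape as the input."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Because the kernel is normalized and edge padding is used, a constant image passes through unchanged, while the variance of additive noise is reduced.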
abstractGer |
Abstract Object detection in computer vision has been a significant research area for the past decade. Identifying objects of multiple classes in an image has attracted great attention because such a system can effectively classify and detect image content. Multi-class object detection from a video or image is quite challenging because of the errors introduced by the localization and classification process. In our proposed system, a generalized hybrid convolutional neural network (H-CNN) model is used to recognize objects in an image. The proposed work integrates pre-processing, object localization, feature extraction, and classification. First, the input image is pre-processed with Gaussian filtering to remove noise and improve image quality. After pre-processing, the image is subjected to object localization, where the objects are localized using Grid Guided Localization (GGL). In the feature extraction phase, the model is pre-trained with AlexNet; here, the AlexNet layers are generalized as fully connected (FC) layers. Finally, the Softmax layer of the AlexNet architecture is replaced by Support Vector Regression (SVR), which acts as the classifier identifying the object class. The classification loss is minimized using the Improved Grey Wolf (IGW) optimization algorithm. Thus, the H-CNN model can quickly classify and label objects in images while keeping training time manageable. The proposed work is implemented in Python. The model is trained and evaluated on several datasets, namely MIT-67, PASCAL VOC2010, MS (Microsoft) COCO, and MSRC. The proposed H-CNN achieved improved results on MIT-67 (96.02%), PASCAL VOC2010 (95.04%), MSRC (97.37%), and MS COCO (94.53%).
The results show that the Mean Average Precision (mAP), Precision, Accuracy, Recall, and F1-Score obtained by H-CNN are better than those of recently developed architectures such as YOLO-fine, EfficientDet, YOLOv4, RetinaNet, GCNet, and HRNet. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
abstract_unstemmed |
Abstract Object detection in computer vision has been a significant research area for the past decade. Identifying objects of multiple classes in an image has attracted great attention because such a system can effectively classify and detect image content. Multi-class object detection from a video or image is quite challenging because of the errors introduced by the localization and classification process. In our proposed system, a generalized hybrid convolutional neural network (H-CNN) model is used to recognize objects in an image. The proposed work integrates pre-processing, object localization, feature extraction, and classification. First, the input image is pre-processed with Gaussian filtering to remove noise and improve image quality. After pre-processing, the image is subjected to object localization, where the objects are localized using Grid Guided Localization (GGL). In the feature extraction phase, the model is pre-trained with AlexNet; here, the AlexNet layers are generalized as fully connected (FC) layers. Finally, the Softmax layer of the AlexNet architecture is replaced by Support Vector Regression (SVR), which acts as the classifier identifying the object class. The classification loss is minimized using the Improved Grey Wolf (IGW) optimization algorithm. Thus, the H-CNN model can quickly classify and label objects in images while keeping training time manageable. The proposed work is implemented in Python. The model is trained and evaluated on several datasets, namely MIT-67, PASCAL VOC2010, MS (Microsoft) COCO, and MSRC. The proposed H-CNN achieved improved results on MIT-67 (96.02%), PASCAL VOC2010 (95.04%), MSRC (97.37%), and MS COCO (94.53%).
The results show that the Mean Average Precision (mAP), Precision, Accuracy, Recall, and F1-Score obtained by H-CNN are better than those of recently developed architectures such as YOLO-fine, EfficientDet, YOLOv4, RetinaNet, GCNet, and HRNet. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
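The abstract names an Improved Grey Wolf (IGW) optimizer for minimizing the classification loss; the specific improvements are not described in this record. For illustration only, the baseline Grey Wolf Optimizer that such variants build on can be sketched in pure NumPy: the three best wolves (alpha, beta, delta) guide the rest of the pack toward the current optimum.

```python
import numpy as np

def grey_wolf_optimize(f, dim, n_wolves=20, n_iter=200, lb=-5.0, ub=5.0, seed=0):
    """Minimize f over a box [lb, ub]^dim using the standard Grey Wolf Optimizer.
    Each wolf moves toward the average of positions suggested by the three
    current best wolves (alpha, beta, delta); the step size parameter `a`
    decays linearly from 2 to 0, shifting from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    fitness = np.apply_along_axis(f, 1, X)
    for t in range(n_iter):
        order = np.argsort(fitness)
        # Copy the leaders so in-place updates below do not change them mid-step.
        alpha, beta, delta = (X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy())
        a = 2.0 * (1 - t / n_iter)  # decays linearly 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])
                new += leader - A * D
            X[i] = np.clip(new / 3.0, lb, ub)
            fitness[i] = f(X[i])
    best = int(np.argmin(fitness))
    return X[best], fitness[best]
```

On a convex test function such as the sphere function, this baseline converges close to the global minimum within a few hundred iterations; the "improved" variant in the paper presumably modifies the update rule or the decay schedule, which this sketch does not attempt to reproduce.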
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2119 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
container_issue |
22 |
title_short |
Multi-class object detection system using hybrid convolutional neural network architecture |
url |
https://dx.doi.org/10.1007/s11042-022-13007-7 |
remote_bool |
true |
author2 |
Lakshmi, Muddana A |
author2Str |
Lakshmi, Muddana A |
ppnlink |
27135030X |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11042-022-13007-7 |
up_date |
2024-07-03T15:47:31.383Z |
_version_ |
1803573423118483456 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">SPR047908521</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230509110131.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220823s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11042-022-13007-7</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR047908521</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s11042-022-13007-7-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Borade, Jay Laxman</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Multi-class object detection system using hybrid convolutional neural network architecture</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" 
"><subfield code="a">© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Object detection in computer vision has been a significant research area for the past decade. Identifying objects of multiple classes in an image has attracted great attention because it enables effective classification and detection. Multi-class object detection from a video or image is quite challenging because of the errors introduced by the localization and classification process. The proposed system uses a hybrid convolutional neural network (H-CNN) model to recognize objects in an image. The proposed work integrates pre-processing, object localization, feature extraction, and classification. First, the input image is pre-processed with Gaussian filtering to remove noise and improve image quality. After pre-processing, the image is subjected to object localization, where the object in the image is localized using Grid Guided Localization (GGL). In the feature extraction phase, the model is pre-trained with AlexNet, whose fully connected (FC) layers provide generalized features. Finally, the Softmax layer in the AlexNet architecture is replaced by SVR (Support Vector Regression), which acts as a classifier for identifying the object class. The classification loss is minimized using the Improved Grey Wolf (IGW) optimization algorithm. Thus, the H-CNN model can quickly classify and label objects in images, and it offers improved classification performance while keeping training time manageable. The proposed work is implemented in Python. The model is trained and evaluated on several datasets, namely MIT-67, PASCAL VOC2010, MS (Microsoft)-COCO, and MSRC, to effectively train and classify objects. The proposed H-CNN achieved improved results with MIT-67 (96.02%), PASCAL VOC2010 (95.04%), MSRC (97.37%), and MS COCO (94.53%). The Mean Average Precision (mAP), Precision, Accuracy, Recall, and F1-Score values obtained by H-CNN surpass those of recently developed architectures such as YOLO-fine, EfficientDet, YOLOv4, RetinaNet, GCNet, and HRNet.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image processing</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Object localization</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Deep learning</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Object recognition</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Machine learning</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lakshmi, Muddana A</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Multimedia tools and applications</subfield><subfield code="d">Dordrecht [u.a.] : Springer Science + Business Media B.V, 1995</subfield><subfield code="g">81(2022), 22 vom: 11. 
Apr., Seite 31727-31751</subfield><subfield code="w">(DE-627)27135030X</subfield><subfield code="w">(DE-600)1479928-5</subfield><subfield code="x">1573-7721</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:81</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:22</subfield><subfield code="g">day:11</subfield><subfield code="g">month:04</subfield><subfield code="g">pages:31727-31751</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s11042-022-13007-7</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2093</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2107</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2119</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2188</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2446</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2472</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2548</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4246</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4328</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">81</subfield><subfield code="j">2022</subfield><subfield code="e">22</subfield><subfield code="b">11</subfield><subfield code="c">04</subfield><subfield code="h">31727-31751</subfield></datafield></record></collection>
|