Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network
Abstract Formation recognition is a significant focus of maritime target recognition. Automatic formation extraction and recognition facilitate autonomous decision‐making. However, few studies have explored formation extraction prior to recognition. This paper introduces a density‐based spatial clustering of applications with noise (DBSCAN) method based on a Gaussian kernel to extract formation targets. On this basis, a depthwise separable convolutional neural network (DSCNN) method is proposed for formation recognition. A track simulation system is established to form a track dataset containing three different proportions of clutter, and the formation extraction method is examined on this track dataset. Subsequently, an image dataset with eight different types of formation is formulated; on the basis of various detection errors, the DSCNN method for formation recognition is compared with several typical deep learning methods. As shown in the experimental results, the DBSCAN method based on a Gaussian kernel guarantees accurate extraction of formation targets subject to different proportions of clutter; it is therefore highly robust and capable of effective formation extraction. Under different radar detection errors, the formation recognition accuracy of DSCNN is 91.5%–99.5%, an improvement of up to 12.5% over other deep learning methods. The combination of DBSCAN and DSCNN realises formation extraction and recognition well under different proportions of clutter in tracks and various radar detection errors.
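The extraction step described in the abstract rests on DBSCAN, which labels dense groups of track points as formation members and isolated points as clutter (noise). As a minimal, self-contained sketch of plain DBSCAN — the paper's Gaussian-kernel density weighting is not reproduced here, and `eps` and `min_pts` are illustrative values, not parameters from the paper:

```python
import math

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN sketch. Returns one label per point; -1 marks noise/clutter."""
    n = len(points)
    labels = [None] * n
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    neighbours = lambda i: [j for j in range(n) if dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # too sparse: treat as clutter
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:                  # grow the cluster through density-reachable points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:    # core point: its neighbours also join the frontier
                queue.extend(nj)
        cluster += 1
    return labels
```

With two tight groups of four points and one far outlier, the groups come out as clusters 0 and 1 and the outlier as -1, mirroring how the paper's extraction step separates formations from clutter in tracks.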
Detailed description

Author: Haotian He [author]; Ling Wu [author]; Xianjun Hu [author]
Format: E-article
Language: English
Published: 2022
Keywords: convolutional neural network; DBSCAN; deep learning; formation extraction; formation recognition
Published in: IET Radar, Sonar & Navigation - Wiley, 2021, 16(2022), 12, pages 1912-1923
Published in: volume:16 ; year:2022 ; number:12 ; pages:1912-1923
Links: Open link
DOI: 10.1049/rsn2.12305
Catalog ID: DOAJ028909062
LEADER 01000caa a22002652 4500
001 DOAJ028909062
003 DE-627
005 20230307131652.0
007 cr uuu---uuuuu
008 230226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1049/rsn2.12305 |2 doi
035 |a (DE-627)DOAJ028909062
035 |a (DE-599)DOAJ93cbc1d7f4f64537a0ac378866014e1c
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
050 0 |a TK5101-6720
100 0 |a Haotian He |e verfasserin |4 aut
245 1 0 |a Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network
264 1 |c 2022
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a Abstract Formation recognition is a significant focus of maritime target recognition. Automatic formation extraction and recognition facilitate autonomous decision‐making. However, few studies have explored formation extraction prior to recognition. This paper introduces a density‐based spatial clustering of applications with noise (DBSCAN) method based on Gaussian kernel to extract formation targets. On this basis, a depthwise separable convolutional neural network (DSCNN) method is proposed for formation recognition. A track simulation system is established to form a track dataset containing three different proportions of clutter, and the formation extraction method is examined using track dataset. Subsequently, the image dataset with eight different types of formation is formulated, on the basis of various detection errors, the DSCNN method for formation recognition is compared with several typical deep learning methods. As exposed in experimental results, the DBSCAN method based on Gaussian kernel can guarantee accurate extraction of formation targets subject to different proportions of clutter. Hence, it is greatly robust and capable of effective formation extraction. Under different radar detection errors, the formation recognition accuracy of DSCNN is 91.5%–99.5%, which achieves performance improvement by up to 12.5% compared with other deep learning methods. The combination of DBSCAN and DSCNN can well realise formation extraction and recognition with different proportions of clutter in tracks and various radar detection errors.
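The DSCNN named in the abstract above builds on depthwise separable convolution, which factorises a standard convolution into a per-channel spatial (depthwise) step followed by a 1×1 channel-mixing (pointwise) step, shrinking the parameter count from k·k·C_in·C_out to k·k·C_in + C_in·C_out. A NumPy sketch of that factorisation — not the paper's actual architecture, with illustrative shapes only:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution sketch (valid padding, stride 1).

    x          : (H, W, C_in) input feature map
    dw_kernels : (k, k, C_in) one spatial filter per input channel
    pw_weights : (C_in, C_out) 1x1 pointwise channel-mixing weights
    """
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]
    Ho, Wo = H - k + 1, W - k + 1

    # Depthwise step: each input channel is filtered independently.
    dw = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):
        for i in range(Ho):
            for j in range(Wo):
                dw[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * dw_kernels[:, :, c])

    # Pointwise step: a 1x1 convolution mixes channels, i.e. a matmul per pixel.
    return dw @ pw_weights
```

For k=3, C_in=2, C_out=3 this uses 3·3·2 + 2·3 = 24 weights where a standard convolution would use 3·3·2·3 = 54, which is the efficiency the depthwise separable design trades on.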
650 4 |a convolutional neural network
650 4 |a DBSCAN
650 4 |a deep learning
650 4 |a formation extraction
650 4 |a formation recognition
653 0 |a Telecommunication
700 0 |a Ling Wu |e verfasserin |4 aut
700 0 |a Xianjun Hu |e verfasserin |4 aut
773 0 8 |i In |t IET Radar, Sonar & Navigation |d Wiley, 2021 |g 16(2022), 12, Seite 1912-1923 |w (DE-627)521693691 |w (DE-600)2264531-7 |x 17518792 |7 nnns
773 1 8 |g volume:16 |g year:2022 |g number:12 |g pages:1912-1923
856 4 0 |u https://doi.org/10.1049/rsn2.12305 |z kostenfrei
856 4 0 |u https://doaj.org/article/93cbc1d7f4f64537a0ac378866014e1c |z kostenfrei
856 4 2 |u https://doaj.org/toc/1751-8784 |y Journal toc |z kostenfrei
856 4 2 |u https://doaj.org/toc/1751-8792 |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_120
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_171
912 |a GBV_ILN_213
912 |a GBV_ILN_224
912 |a GBV_ILN_230
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_636
912 |a GBV_ILN_2004
912 |a GBV_ILN_2005
912 |a GBV_ILN_2006
912 |a GBV_ILN_2007
912 |a GBV_ILN_2011
912 |a GBV_ILN_2014
912 |a GBV_ILN_2026
912 |a GBV_ILN_2034
912 |a GBV_ILN_2037
912 |a GBV_ILN_2038
912 |a GBV_ILN_2044
912 |a GBV_ILN_2048
912 |a GBV_ILN_2049
912 |a GBV_ILN_2050
912 |a GBV_ILN_2055
912 |a GBV_ILN_2056
912 |a GBV_ILN_2057
912 |a GBV_ILN_2059
912 |a GBV_ILN_2061
912 |a GBV_ILN_2064
912 |a GBV_ILN_2068
912 |a GBV_ILN_2088
912 |a GBV_ILN_2106
912 |a GBV_ILN_2108
912 |a GBV_ILN_2111
912 |a GBV_ILN_2118
912 |a GBV_ILN_2122
912 |a GBV_ILN_2143
912 |a GBV_ILN_2144
912 |a GBV_ILN_2147
912 |a GBV_ILN_2148
912 |a GBV_ILN_2152
912 |a GBV_ILN_2153
912 |a GBV_ILN_2232
912 |a GBV_ILN_2336
912 |a GBV_ILN_2470
912 |a GBV_ILN_2507
912 |a GBV_ILN_2522
912 |a GBV_ILN_4012
912 |a GBV_ILN_4035
912 |a GBV_ILN_4037
912 |a GBV_ILN_4046
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4242
912 |a GBV_ILN_4249
912 |a GBV_ILN_4251
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4326
912 |a GBV_ILN_4333
912 |a GBV_ILN_4334
912 |a GBV_ILN_4335
912 |a GBV_ILN_4336
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4700
951 |a AR
952 |d 16 |j 2022 |e 12 |h 1912-1923
ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">16</subfield><subfield code="j">2022</subfield><subfield code="e">12</subfield><subfield 
code="h">1912-1923</subfield></datafield></record></collection>
|
callnumber-first |
T - Technology |
author |
Haotian He |
spellingShingle |
Haotian He misc TK5101-6720 misc convolutional neural network misc DBSCAN misc deep learning misc formation extraction misc formation recognition misc Telecommunication Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network |
authorStr |
Haotian He |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)521693691 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK5101-6720 |
illustrated |
Not Illustrated |
issn |
17518792 |
topic_title |
TK5101-6720 Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network convolutional neural network DBSCAN deep learning formation extraction formation recognition |
topic |
misc TK5101-6720 misc convolutional neural network misc DBSCAN misc deep learning misc formation extraction misc formation recognition misc Telecommunication |
topic_unstemmed |
misc TK5101-6720 misc convolutional neural network misc DBSCAN misc deep learning misc formation extraction misc formation recognition misc Telecommunication |
topic_browse |
misc TK5101-6720 misc convolutional neural network misc DBSCAN misc deep learning misc formation extraction misc formation recognition misc Telecommunication |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IET Radar, Sonar & Navigation |
hierarchy_parent_id |
521693691 |
hierarchy_top_title |
IET Radar, Sonar & Navigation |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)521693691 (DE-600)2264531-7 |
title |
Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network |
ctrlnum |
(DE-627)DOAJ028909062 (DE-599)DOAJ93cbc1d7f4f64537a0ac378866014e1c |
title_full |
Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network |
author_sort |
Haotian He |
journal |
IET Radar, Sonar & Navigation |
journalStr |
IET Radar, Sonar & Navigation |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
1912 |
author_browse |
Haotian He Ling Wu Xianjun Hu |
container_volume |
16 |
class |
TK5101-6720 |
format_se |
Elektronische Aufsätze |
author-letter |
Haotian He |
doi_str_mv |
10.1049/rsn2.12305 |
author2-role |
verfasserin |
title_sort |
warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network |
callnumber |
TK5101-6720 |
title_auth |
Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network |
abstract |
Abstract Formation recognition is a significant focus of maritime target recognition. Automatic formation extraction and recognition facilitate autonomous decision‐making. However, few studies have explored formation extraction prior to recognition. This paper introduces a density‐based spatial clustering of applications with noise (DBSCAN) method based on a Gaussian kernel to extract formation targets. On this basis, a depthwise separable convolutional neural network (DSCNN) method is proposed for formation recognition. A track simulation system is established to generate a track dataset containing three different proportions of clutter, and the formation extraction method is evaluated on this dataset. Subsequently, an image dataset with eight different types of formation is constructed, and under various detection errors the DSCNN method for formation recognition is compared with several typical deep learning methods. Experimental results show that the Gaussian‐kernel‐based DBSCAN method accurately extracts formation targets subject to different proportions of clutter; it is therefore highly robust and capable of effective formation extraction. Under different radar detection errors, the formation recognition accuracy of the DSCNN is 91.5%–99.5%, an improvement of up to 12.5% over other deep learning methods. The combination of DBSCAN and DSCNN thus realises formation extraction and recognition under different proportions of clutter in tracks and various radar detection errors.
abstractGer |
Abstract Formation recognition is a significant focus of maritime target recognition. Automatic formation extraction and recognition facilitate autonomous decision‐making. However, few studies have explored formation extraction prior to recognition. This paper introduces a density‐based spatial clustering of applications with noise (DBSCAN) method based on a Gaussian kernel to extract formation targets. On this basis, a depthwise separable convolutional neural network (DSCNN) method is proposed for formation recognition. A track simulation system is established to generate a track dataset containing three different proportions of clutter, and the formation extraction method is evaluated on this dataset. Subsequently, an image dataset with eight different types of formation is constructed, and under various detection errors the DSCNN method for formation recognition is compared with several typical deep learning methods. Experimental results show that the Gaussian‐kernel‐based DBSCAN method accurately extracts formation targets subject to different proportions of clutter; it is therefore highly robust and capable of effective formation extraction. Under different radar detection errors, the formation recognition accuracy of the DSCNN is 91.5%–99.5%, an improvement of up to 12.5% over other deep learning methods. The combination of DBSCAN and DSCNN thus realises formation extraction and recognition under different proportions of clutter in tracks and various radar detection errors.
abstract_unstemmed |
Abstract Formation recognition is a significant focus of maritime target recognition. Automatic formation extraction and recognition facilitate autonomous decision‐making. However, few studies have explored formation extraction prior to recognition. This paper introduces a density‐based spatial clustering of applications with noise (DBSCAN) method based on a Gaussian kernel to extract formation targets. On this basis, a depthwise separable convolutional neural network (DSCNN) method is proposed for formation recognition. A track simulation system is established to generate a track dataset containing three different proportions of clutter, and the formation extraction method is evaluated on this dataset. Subsequently, an image dataset with eight different types of formation is constructed, and under various detection errors the DSCNN method for formation recognition is compared with several typical deep learning methods. Experimental results show that the Gaussian‐kernel‐based DBSCAN method accurately extracts formation targets subject to different proportions of clutter; it is therefore highly robust and capable of effective formation extraction. Under different radar detection errors, the formation recognition accuracy of the DSCNN is 91.5%–99.5%, an improvement of up to 12.5% over other deep learning methods. The combination of DBSCAN and DSCNN thus realises formation extraction and recognition under different proportions of clutter in tracks and various radar detection errors.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
container_issue |
12 |
title_short |
Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network |
url |
https://doi.org/10.1049/rsn2.12305 https://doaj.org/article/93cbc1d7f4f64537a0ac378866014e1c https://doaj.org/toc/1751-8784 https://doaj.org/toc/1751-8792 |
remote_bool |
true |
author2 |
Ling Wu Xianjun Hu |
author2Str |
Ling Wu Xianjun Hu |
ppnlink |
521693691 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1049/rsn2.12305 |
callnumber-a |
TK5101-6720 |
up_date |
2024-07-03T20:04:53.601Z |
_version_ |
1803589615471296512 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ028909062</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230307131652.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230226s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1049/rsn2.12305</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ028909062</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ93cbc1d7f4f64537a0ac378866014e1c</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK5101-6720</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Haotian He</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Warship formation extraction and recognition based on density‐based spatial clustering of applications with noise and improved convolutional neural network</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Formation recognition is a significant focus of maritime target recognition. Automatic formation extraction and recognition facilitate autonomous decision‐making. However, few studies have explored formation extraction prior to recognition. This paper introduces a density‐based spatial clustering of applications with noise (DBSCAN) method based on a Gaussian kernel to extract formation targets. On this basis, a depthwise separable convolutional neural network (DSCNN) method is proposed for formation recognition. A track simulation system is established to generate a track dataset containing three different proportions of clutter, and the formation extraction method is evaluated on this dataset. Subsequently, an image dataset with eight different types of formation is constructed, and under various detection errors the DSCNN method for formation recognition is compared with several typical deep learning methods. Experimental results show that the Gaussian‐kernel‐based DBSCAN method accurately extracts formation targets subject to different proportions of clutter; it is therefore highly robust and capable of effective formation extraction. Under different radar detection errors, the formation recognition accuracy of the DSCNN is 91.5%–99.5%, an improvement of up to 12.5% over other deep learning methods. The combination of DBSCAN and DSCNN thus realises formation extraction and recognition under different proportions of clutter in tracks and various radar detection errors.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">convolutional neural network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">DBSCAN</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">formation extraction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">formation recognition</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Telecommunication</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Ling Wu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xianjun Hu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IET Radar, Sonar & Navigation</subfield><subfield code="d">Wiley, 2021</subfield><subfield code="g">16(2022), 12, Seite 1912-1923</subfield><subfield code="w">(DE-627)521693691</subfield><subfield code="w">(DE-600)2264531-7</subfield><subfield code="x">17518792</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:16</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:12</subfield><subfield code="g">pages:1912-1923</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1049/rsn2.12305</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield
code="u">https://doaj.org/article/93cbc1d7f4f64537a0ac378866014e1c</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1049/rsn2.12305</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/1751-8784</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/1751-8792</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">16</subfield><subfield code="j">2022</subfield><subfield code="e">12</subfield><subfield 
code="h">1912-1923</subfield></datafield></record></collection>