TO–YOLOX: a pure CNN tiny object detection model for remote sensing images
Remote sensing and deep learning are being widely combined in tasks such as urban planning and disaster prevention. However, due to interference caused by density, overlap, and coverage, tiny object detection in remote sensing images has long been a difficult problem. Therefore, we propose...
Detailed description

Author(s): Zhe Chen [author]; Yuan Liang [author]; Zhengbo Yu [author]; Ke Xu [author]; Qingyun Ji [author]; Xueqi Zhang [author]; Quanping Zhang [author]; Zijia Cui [author]; Ziqiong He [author]; Ruichun Chang [author]; Zhongchang Sun [author]; Keyan Xiao [author]; Huadong Guo [author]
Format: E-Article
Language: English
Published: 2023
Keywords: tiny object detection; to-yolox; remote sensing image; deep learning; attention mechanism
Published in: International Journal of Digital Earth - Taylor & Francis Group, 2022, 16(2023), 1, pages 3882-3904
Published in: volume:16 ; year:2023 ; number:1 ; pages:3882-3904
Links: Open link
DOI / URN: 10.1080/17538947.2023.2261901
Catalog ID: DOAJ098347489
LEADER | 01000naa a22002652 4500 | ||
001 | DOAJ098347489 | ||
003 | DE-627 | ||
005 | 20240413220750.0 | ||
007 | cr uuu---uuuuu | ||
008 | 240413s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1080/17538947.2023.2261901 |2 doi | |
035 | |a (DE-627)DOAJ098347489 | ||
035 | |a (DE-599)DOAJ1d36b1364e3c44bfb892403168220d8c | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
050 | 0 | |a GA1-1776 | |
100 | 0 | |a Zhe Chen |e verfasserin |4 aut | |
245 | 1 | 0 | |a TO–YOLOX: a pure CNN tiny object detection model for remote sensing images |
264 | 1 | |c 2023 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Remote sensing and deep learning are being widely combined in tasks such as urban planning and disaster prevention. However, due to interference caused by density, overlap, and coverage, tiny object detection in remote sensing images has long been a difficult problem. Therefore, we propose a novel TO–YOLOX (Tiny Object–You Only Look Once) model. TO–YOLOX possesses a MiSo (Multiple-in-Single-out) feature fusion structure with a spatial-shift design; the model balances positive and negative samples and enhances information interaction within local patches of remote sensing images. TO–YOLOX utilizes an adaptive IOU-T (Intersection Over Uni-Tiny) loss to enhance the localization accuracy of tiny objects, and it applies a Group-CBAM (group-convolutional block attention module) attention mechanism to enhance the perception of tiny objects in remote sensing images. To verify the effectiveness and efficiency of TO–YOLOX, we used three aerial-photography tiny object detection datasets, namely VisDrone2021, Tiny Person, and DOTA–HBB, on which the following mean average precision (mAP) values were recorded, respectively: 45.31% (+10.03%), 28.9% (+9.36%), and 63.02% (+9.62%). In recognizing tiny objects, TO–YOLOX exhibits a stronger ability than Faster R-CNN, RetinaNet, YOLOv5, YOLOv6, YOLOv7, and YOLOX, while remaining computationally fast. | ||
650 | 4 | |a tiny object detection | |
650 | 4 | |a to-yolox | |
650 | 4 | |a remote sensing image | |
650 | 4 | |a deep learning | |
650 | 4 | |a attention mechanism | |
653 | 0 | |a Mathematical geography. Cartography | |
700 | 0 | |a Yuan Liang |e verfasserin |4 aut | |
700 | 0 | |a Zhengbo Yu |e verfasserin |4 aut | |
700 | 0 | |a Ke Xu |e verfasserin |4 aut | |
700 | 0 | |a Qingyun Ji |e verfasserin |4 aut | |
700 | 0 | |a Xueqi Zhang |e verfasserin |4 aut | |
700 | 0 | |a Quanping Zhang |e verfasserin |4 aut | |
700 | 0 | |a Zijia Cui |e verfasserin |4 aut | |
700 | 0 | |a Ziqiong He |e verfasserin |4 aut | |
700 | 0 | |a Ruichun Chang |e verfasserin |4 aut | |
700 | 0 | |a Zhongchang Sun |e verfasserin |4 aut | |
700 | 0 | |a Keyan Xiao |e verfasserin |4 aut | |
700 | 0 | |a Huadong Guo |e verfasserin |4 aut | |
773 | 0 | 8 | |i In |t International Journal of Digital Earth |d Taylor & Francis Group, 2022 |g 16(2023), 1, Seite 3882-3904 |w (DE-627)558695884 |w (DE-600)2410527-2 |x 17538955 |7 nnns |
773 | 1 | 8 | |g volume:16 |g year:2023 |g number:1 |g pages:3882-3904 |
856 | 4 | 0 | |u https://doi.org/10.1080/17538947.2023.2261901 |z kostenfrei |
856 | 4 | 0 | |u https://doaj.org/article/1d36b1364e3c44bfb892403168220d8c |z kostenfrei |
856 | 4 | 0 | |u http://dx.doi.org/10.1080/17538947.2023.2261901 |z kostenfrei |
856 | 4 | 2 | |u https://doaj.org/toc/1753-8947 |y Journal toc |z kostenfrei |
856 | 4 | 2 | |u https://doaj.org/toc/1753-8955 |y Journal toc |z kostenfrei |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_DOAJ | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_206 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_4012 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4367 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 16 |j 2023 |e 1 |h 3882-3904 |
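For context, the mean average precision (mAP) figures quoted in the abstract are computed from the intersection-over-union (IoU) overlap between predicted and ground-truth boxes, and the paper's adaptive IOU-T loss builds on that same quantity. Below is a minimal, generic sketch of axis-aligned IoU in Python; it is not the authors' code, and the `[x1, y1, x2, y2]` box format is an assumption for illustration:

```python
def iou(box_a, box_b):
    """Axis-aligned IoU for two boxes given as [x1, y1, x2, y2]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (degenerates to zero area if boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); mAP then averages precision over recall levels and classes. For tiny objects, small absolute localization errors cause large IoU drops, which is the difficulty the abstract's IOU-T loss targets.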
callnumber-first |
G - Geography, Anthropology, Recreation |
author |
Zhe Chen |
spellingShingle |
Zhe Chen misc GA1-1776 misc tiny object detection misc to-yolox misc remote sensing image misc deep learning misc attention mechanism misc Mathematical geography. Cartography TO–YOLOX: a pure CNN tiny object detection model for remote sensing images |
authorStr |
Zhe Chen |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)558695884 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
GA1-1776 |
illustrated |
Not Illustrated |
issn |
17538955 |
topic_title |
GA1-1776 TO–YOLOX: a pure CNN tiny object detection model for remote sensing images tiny object detection to-yolox remote sensing image deep learning attention mechanism |
topic |
misc GA1-1776 misc tiny object detection misc to-yolox misc remote sensing image misc deep learning misc attention mechanism misc Mathematical geography. Cartography |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
International Journal of Digital Earth |
hierarchy_parent_id |
558695884 |
hierarchy_top_title |
International Journal of Digital Earth |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)558695884 (DE-600)2410527-2 |
title |
TO–YOLOX: a pure CNN tiny object detection model for remote sensing images |
ctrlnum |
(DE-627)DOAJ098347489 (DE-599)DOAJ1d36b1364e3c44bfb892403168220d8c |
title_full |
TO–YOLOX: a pure CNN tiny object detection model for remote sensing images |
author_sort |
Zhe Chen |
journal |
International Journal of Digital Earth |
journalStr |
International Journal of Digital Earth |
callnumber-first-code |
G |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
txt |
container_start_page |
3882 |
author_browse |
Zhe Chen Yuan Liang Zhengbo Yu Ke Xu Qingyun Ji Xueqi Zhang Quanping Zhang Zijia Cui Ziqiong He Ruichun Chang Zhongchang Sun Keyan Xiao Huadong Guo |
container_volume |
16 |
class |
GA1-1776 |
format_se |
Elektronische Aufsätze |
author-letter |
Zhe Chen |
doi_str_mv |
10.1080/17538947.2023.2261901 |
author2-role |
verfasserin |
title_sort |
to–yolox: a pure cnn tiny object detection model for remote sensing images |
callnumber |
GA1-1776 |
title_auth |
TO–YOLOX: a pure CNN tiny object detection model for remote sensing images |
abstract |
Remote sensing and deep learning are widely combined in tasks such as urban planning and disaster prevention. However, owing to interference caused by density, overlap, and coverage, tiny object detection in remote sensing images has long been a difficult problem. We therefore propose a novel TO–YOLOX (Tiny Object–You Only Look Once) model. TO–YOLOX possesses a MiSo (Multiple-in-Single-out) feature fusion structure with a spatial-shift design; the model balances positive and negative samples and enhances information interaction within local patches of remote sensing images. TO–YOLOX utilizes an adaptive IOU-T (Intersection Over Union-Tiny) loss to improve the localization accuracy of tiny objects, and applies a Group-CBAM (group-convolutional block attention module) attention mechanism to enhance the perception of tiny objects in remote sensing images. To verify the effectiveness and efficiency of TO–YOLOX, we evaluated it on three aerial-photography tiny object detection datasets, namely VisDrone2021, Tiny Person, and DOTA–HBB, obtaining mean average precision (mAP) values of 45.31% (+10.03%), 28.9% (+9.36%), and 63.02% (+9.62%), respectively. TO–YOLOX recognizes tiny objects more accurately than Faster R-CNN, RetinaNet, YOLOv5, YOLOv6, YOLOv7, and YOLOX, while remaining computationally fast.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2005 GBV_ILN_2009 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2055 GBV_ILN_2111 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
container_issue |
1 |
title_short |
TO–YOLOX: a pure CNN tiny object detection model for remote sensing images |
url |
https://doi.org/10.1080/17538947.2023.2261901 https://doaj.org/article/1d36b1364e3c44bfb892403168220d8c http://dx.doi.org/10.1080/17538947.2023.2261901 https://doaj.org/toc/1753-8947 https://doaj.org/toc/1753-8955 |
remote_bool |
true |
author2 |
Yuan Liang Zhengbo Yu Ke Xu Qingyun Ji Xueqi Zhang Quanping Zhang Zijia Cui Ziqiong He Ruichun Chang Zhongchang Sun Keyan Xiao Huadong Guo |
author2Str |
Yuan Liang Zhengbo Yu Ke Xu Qingyun Ji Xueqi Zhang Quanping Zhang Zijia Cui Ziqiong He Ruichun Chang Zhongchang Sun Keyan Xiao Huadong Guo |
ppnlink |
558695884 |
callnumber-subject |
GA - Mathematical Geography and Cartography |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1080/17538947.2023.2261901 |
callnumber-a |
GA1-1776 |
up_date |
2024-07-03T16:46:37.600Z |
_version_ |
1803577141600714752 |
score |
7.4005537 |