TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos
Detailed description
Author: Qiqi Zhu [author]; Xin Huang [author]; Qingfeng Guan [author]
Format: E-Article
Language: English
Published: 2024
Subject headings:
Parent work: In: International Journal of Applied Earth Observations and Geoinformation - Elsevier, 2022, 128(2024), p. 103723-
Parent work: volume:128; year:2024; pages:103723-
Links:
DOI / URN: 10.1016/j.jag.2024.103723
Catalog ID: DOAJ091688493
LEADER 01000caa a22002652 4500
001 DOAJ091688493
003 DE-627
005 20240413235832.0
007 cr uuu---uuuuu
008 240412s2024 xx |||||o 00| ||eng c
024 7_ |a 10.1016/j.jag.2024.103723 |2 doi
035 __ |a (DE-627)DOAJ091688493
035 __ |a (DE-599)DOAJa21c4d2d270846d48eea6d98b1491db4
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
050 _0 |a GB3-5030
050 _0 |a GE1-350
100 0_ |a Qiqi Zhu |e verfasserin |4 aut
245 10 |a TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos
264 _1 |c 2024
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
520 __ |a Satellite video object tracking has emerged as a technology for dynamically observing the Earth, making it possible to track moving objects over short time spans. Deep learning methods such as CNN-based and transformer-based trackers have been widely applied to single object tracking in natural videos. However, whereas targets in natural videos are captured by ground-level sensors, satellite sensors observe from altitudes of hundreds of kilometers or more, so trackers designed for natural videos may suffer from complex backgrounds, especially for small targets with weak features as seen from remote sensing platforms. Furthermore, confusion between the target and visually similar objects, as well as target deformation in satellite videos, can also lead to incorrect positioning. To address these problems, we propose a target-aware bilateral CNN-Transformer network (TabCtNet). In TabCtNet, a bilateral CNN-Transformer architecture that aggregates and exchanges local spatial information and global temporal context is designed to tackle the challenge of small targets with weak features against complex, cluttered backgrounds in satellite videos. To reduce the impact of similar objects, a target-aware block-erasing strategy (TAS) is constructed to generate weakened heatmaps from the template target mask in a data-driven manner. Moreover, a pixel-wise refinement module with corner-based box estimation (PE) is designed to extract finer-grained spatial information for more accurate box estimation and to reduce the effect of target deformation. Experimental results show that TabCtNet quantitatively and qualitatively outperforms advanced single object tracking methods on two different satellite video datasets covering four categories of targets from different scenarios. Furthermore, to investigate the generalizability of the TabCtNet framework, satellite videos sourced from different countries and captured by various satellite platforms were used for evaluation, and the results reveal robust performance across various scenarios.
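The abstract above names corner-based box estimation but gives no implementation details. As a generic illustration only (not the paper's actual PE module; the function names and the soft-argmax decoding are assumptions), a corner-based tracker head typically predicts a top-left and a bottom-right corner heatmap, and the box is decoded as the probability-weighted average of pixel coordinates:

```python
import numpy as np

def softargmax_corner(heatmap: np.ndarray) -> tuple[float, float]:
    """Decode a sub-pixel (x, y) corner location from a heatmap as the
    softmax-probability-weighted average of all pixel coordinates."""
    h, w = heatmap.shape
    probs = np.exp(heatmap - heatmap.max())  # numerically stable softmax
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]              # row = y, column = x
    return float((probs * xs).sum()), float((probs * ys).sum())

def decode_box(tl_heatmap: np.ndarray, br_heatmap: np.ndarray):
    """Estimate a box (x1, y1, x2, y2) from top-left and bottom-right
    corner heatmaps (generic corner-based box estimation sketch)."""
    x1, y1 = softargmax_corner(tl_heatmap)
    x2, y2 = softargmax_corner(br_heatmap)
    return x1, y1, x2, y2

# Toy example: sharp peaks at (x=5, y=4) and (x=20, y=18) on 32x32 maps.
tl = np.full((32, 32), -10.0); tl[4, 5] = 10.0
br = np.full((32, 32), -10.0); br[18, 20] = 10.0
x1, y1, x2, y2 = decode_box(tl, br)
```

With sharp heatmap peaks the decoded corners land on the peak pixels; with flatter responses the soft-argmax yields sub-pixel positions, which is why corner decoding can give finer box localization than a single argmax.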
650 _4 |a Target-aware
650 _4 |a Object tracking
650 _4 |a Satellite videos
650 _4 |a Weaken heatmap
650 _4 |a Pixel-wise refinement
650 _4 |a Corner-based tracker
653 _0 |a Physical geography
653 _0 |a Environmental sciences
700 0_ |a Xin Huang |e verfasserin |4 aut
700 0_ |a Qingfeng Guan |e verfasserin |4 aut
773 08 |i In |t International Journal of Applied Earth Observations and Geoinformation |d Elsevier, 2022 |g 128(2024), Seite 103723- |w (DE-627)359784119 |w (DE-600)2097960-5 |x 1872826X |7 nnns
773 18 |g volume:128 |g year:2024 |g pages:103723-
856 40 |u https://doi.org/10.1016/j.jag.2024.103723 |z kostenfrei
856 40 |u https://doaj.org/article/a21c4d2d270846d48eea6d98b1491db4 |z kostenfrei
856 40 |u http://www.sciencedirect.com/science/article/pii/S1569843224000773 |z kostenfrei
856 42 |u https://doaj.org/toc/1569-8432 |y Journal toc |z kostenfrei
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_DOAJ
912 __ |a GBV_ILN_11
912 __ |a GBV_ILN_20
912 __ |a GBV_ILN_22
912 __ |a GBV_ILN_23
912 __ |a GBV_ILN_24
912 __ |a GBV_ILN_31
912 __ |a GBV_ILN_39
912 __ |a GBV_ILN_40
912 __ |a GBV_ILN_60
912 __ |a GBV_ILN_62
912 __ |a GBV_ILN_63
912 __ |a GBV_ILN_65
912 __ |a GBV_ILN_69
912 __ |a GBV_ILN_70
912 __ |a GBV_ILN_73
912 __ |a GBV_ILN_95
912 __ |a GBV_ILN_105
912 __ |a GBV_ILN_110
912 __ |a GBV_ILN_151
912 __ |a GBV_ILN_161
912 __ |a GBV_ILN_170
912 __ |a GBV_ILN_213
912 __ |a GBV_ILN_224
912 __ |a GBV_ILN_230
912 __ |a GBV_ILN_285
912 __ |a GBV_ILN_293
912 __ |a GBV_ILN_370
912 __ |a GBV_ILN_602
912 __ |a GBV_ILN_2004
912 __ |a GBV_ILN_2005
912 __ |a GBV_ILN_2008
912 __ |a GBV_ILN_2014
912 __ |a GBV_ILN_2034
912 __ |a GBV_ILN_2044
912 __ |a GBV_ILN_2048
912 __ |a GBV_ILN_2064
912 __ |a GBV_ILN_2088
912 __ |a GBV_ILN_2106
912 __ |a GBV_ILN_2112
912 __ |a GBV_ILN_2122
912 __ |a GBV_ILN_2143
912 __ |a GBV_ILN_2152
912 __ |a GBV_ILN_2153
912 __ |a GBV_ILN_2232
912 __ |a GBV_ILN_2336
912 __ |a GBV_ILN_4012
912 __ |a GBV_ILN_4037
912 __ |a GBV_ILN_4112
912 __ |a GBV_ILN_4125
912 __ |a GBV_ILN_4126
912 __ |a GBV_ILN_4249
912 __ |a GBV_ILN_4305
912 __ |a GBV_ILN_4306
912 __ |a GBV_ILN_4307
912 __ |a GBV_ILN_4313
912 __ |a GBV_ILN_4322
912 __ |a GBV_ILN_4323
912 __ |a GBV_ILN_4324
912 __ |a GBV_ILN_4325
912 __ |a GBV_ILN_4335
912 __ |a GBV_ILN_4338
912 __ |a GBV_ILN_4367
912 __ |a GBV_ILN_4700
951 __ |a AR
952 __ |d 128 |j 2024 |h 103723-
ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">128</subfield><subfield code="j">2024</subfield><subfield code="h">103723-</subfield></datafield></record></collection>
|
callnumber-first |
G - Geography, Anthropology, Recreation |
author |
Qiqi Zhu |
spellingShingle |
Qiqi Zhu misc GB3-5030 misc GE1-350 misc Target-aware misc Object tracking misc Satellite videos misc Weaken heatmap misc Pixel-wise refinement misc Corner-based tracker misc Physical geography misc Environmental sciences TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos |
authorStr |
Qiqi Zhu |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)359784119 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
GB3-5030 |
illustrated |
Not Illustrated |
issn |
1872826X |
topic_title |
GB3-5030 GE1-350 TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos Target-aware Object tracking Satellite videos Weaken heatmap Pixel-wise refinement Corner-based tracker |
topic |
misc GB3-5030 misc GE1-350 misc Target-aware misc Object tracking misc Satellite videos misc Weaken heatmap misc Pixel-wise refinement misc Corner-based tracker misc Physical geography misc Environmental sciences |
topic_unstemmed |
misc GB3-5030 misc GE1-350 misc Target-aware misc Object tracking misc Satellite videos misc Weaken heatmap misc Pixel-wise refinement misc Corner-based tracker misc Physical geography misc Environmental sciences |
topic_browse |
misc GB3-5030 misc GE1-350 misc Target-aware misc Object tracking misc Satellite videos misc Weaken heatmap misc Pixel-wise refinement misc Corner-based tracker misc Physical geography misc Environmental sciences |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
International Journal of Applied Earth Observations and Geoinformation |
hierarchy_parent_id |
359784119 |
hierarchy_top_title |
International Journal of Applied Earth Observations and Geoinformation |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)359784119 (DE-600)2097960-5 |
title |
TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos |
ctrlnum |
(DE-627)DOAJ091688493 (DE-599)DOAJa21c4d2d270846d48eea6d98b1491db4 |
title_full |
TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos |
author_sort |
Qiqi Zhu |
journal |
International Journal of Applied Earth Observations and Geoinformation |
journalStr |
International Journal of Applied Earth Observations and Geoinformation |
callnumber-first-code |
G |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2024 |
contenttype_str_mv |
txt |
container_start_page |
103723 |
author_browse |
Qiqi Zhu Xin Huang Qingfeng Guan |
container_volume |
128 |
class |
GB3-5030 GE1-350 |
format_se |
Elektronische Aufsätze |
author-letter |
Qiqi Zhu |
doi_str_mv |
10.1016/j.jag.2024.103723 |
author2-role |
verfasserin |
title_sort |
tabctnet: target-aware bilateral cnn-transformer network for single object tracking in satellite videos |
callnumber |
GB3-5030 |
title_auth |
TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos |
abstract |
Satellite video object tracking has become an emerging technology for dynamically observing the Earth, making it possible to track moving objects over short time spans. Deep learning methods such as CNN-based trackers and transformer-based trackers have been widely applied to single object tracking in natural videos. Targets in natural videos are captured by ground-level sensors, whereas satellite sensors image from altitudes of hundreds of kilometers or more; trackers designed for natural videos may therefore suffer from complex backgrounds, especially for small targets with weak features as viewed from remote sensing platforms. Furthermore, confusion between the target and visually similar objects, as well as target deformation in satellite videos, can also lead to incorrect positioning. To address these problems, we propose a target-aware bilateral CNN-Transformer network (TabCtNet). In TabCtNet, a bilateral CNN-Transformer architecture with aggregation and interaction of local spatial information and global temporal context is designed to tackle the challenge of small targets with weak features against complex and cluttered backgrounds in satellite videos. To effectively reduce the impact of similar objects, a target-aware block-erasing strategy (TAS) is constructed to generate weakened heatmaps from the template target mask in a data-driven manner. Moreover, a pixel-wise refinement module with corner-based box estimation (PE) is designed to extract finer-grained spatial information for more accurate box estimation and to reduce the effect of target deformation. Experimental results show that TabCtNet quantitatively and qualitatively outperforms advanced single object tracking methods on two different satellite video datasets with four categories of targets from different scenarios. 
Furthermore, to investigate the generalizability of the TabCtNet framework, satellite videos sourced from different countries captured by various satellite platforms were used for evaluation, and the results reveal its robust performance across various scenarios. |
abstractGer |
Satellite video object tracking has become an emerging technology for dynamically observing the Earth, making it possible to track moving objects over short time spans. Deep learning methods such as CNN-based trackers and transformer-based trackers have been widely applied to single object tracking in natural videos. Targets in natural videos are captured by ground-level sensors, whereas satellite sensors image from altitudes of hundreds of kilometers or more; trackers designed for natural videos may therefore suffer from complex backgrounds, especially for small targets with weak features as viewed from remote sensing platforms. Furthermore, confusion between the target and visually similar objects, as well as target deformation in satellite videos, can also lead to incorrect positioning. To address these problems, we propose a target-aware bilateral CNN-Transformer network (TabCtNet). In TabCtNet, a bilateral CNN-Transformer architecture with aggregation and interaction of local spatial information and global temporal context is designed to tackle the challenge of small targets with weak features against complex and cluttered backgrounds in satellite videos. To effectively reduce the impact of similar objects, a target-aware block-erasing strategy (TAS) is constructed to generate weakened heatmaps from the template target mask in a data-driven manner. Moreover, a pixel-wise refinement module with corner-based box estimation (PE) is designed to extract finer-grained spatial information for more accurate box estimation and to reduce the effect of target deformation. Experimental results show that TabCtNet quantitatively and qualitatively outperforms advanced single object tracking methods on two different satellite video datasets with four categories of targets from different scenarios. 
Furthermore, to investigate the generalizability of the TabCtNet framework, satellite videos sourced from different countries captured by various satellite platforms were used for evaluation, and the results reveal its robust performance across various scenarios. |
abstract_unstemmed |
Satellite video object tracking has become an emerging technology for dynamically observing the Earth, making it possible to track moving objects over short time spans. Deep learning methods such as CNN-based trackers and transformer-based trackers have been widely applied to single object tracking in natural videos. Targets in natural videos are captured by ground-level sensors, whereas satellite sensors image from altitudes of hundreds of kilometers or more; trackers designed for natural videos may therefore suffer from complex backgrounds, especially for small targets with weak features as viewed from remote sensing platforms. Furthermore, confusion between the target and visually similar objects, as well as target deformation in satellite videos, can also lead to incorrect positioning. To address these problems, we propose a target-aware bilateral CNN-Transformer network (TabCtNet). In TabCtNet, a bilateral CNN-Transformer architecture with aggregation and interaction of local spatial information and global temporal context is designed to tackle the challenge of small targets with weak features against complex and cluttered backgrounds in satellite videos. To effectively reduce the impact of similar objects, a target-aware block-erasing strategy (TAS) is constructed to generate weakened heatmaps from the template target mask in a data-driven manner. Moreover, a pixel-wise refinement module with corner-based box estimation (PE) is designed to extract finer-grained spatial information for more accurate box estimation and to reduce the effect of target deformation. Experimental results show that TabCtNet quantitatively and qualitatively outperforms advanced single object tracking methods on two different satellite video datasets with four categories of targets from different scenarios. 
Furthermore, to investigate the generalizability of the TabCtNet framework, satellite videos sourced from different countries captured by various satellite platforms were used for evaluation, and the results reveal its robust performance across various scenarios. |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2008 GBV_ILN_2014 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos |
url |
https://doi.org/10.1016/j.jag.2024.103723 https://doaj.org/article/a21c4d2d270846d48eea6d98b1491db4 http://www.sciencedirect.com/science/article/pii/S1569843224000773 https://doaj.org/toc/1569-8432 |
remote_bool |
true |
author2 |
Xin Huang Qingfeng Guan |
author2Str |
Xin Huang Qingfeng Guan |
ppnlink |
359784119 |
callnumber-subject |
GB - Physical Geography |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.jag.2024.103723 |
callnumber-a |
GB3-5030 |
up_date |
2024-07-03T21:40:40.333Z |
_version_ |
1803595641348161536 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ091688493</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240413235832.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">240412s2024 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.jag.2024.103723</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ091688493</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJa21c4d2d270846d48eea6d98b1491db4</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">GB3-5030</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">GE1-350</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Qiqi Zhu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2024</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield 
tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Satellite video object tracking has become an emerging technology for dynamically observing the earth, providing the possibility for tracking moving objects in a short time. Deep learning methods such as CNN-based trackers and transformer-based trackers have been widely applied for single object tracking in natural videos. The target in natural videos is captured by ground sensors, whereas satellite sensors come from high altitudes of hundreds of kilometers or more, the trackers designed for natural videos may suffer the influence of complex background, especially small targets with weak features in view of remote sensing platforms. Furtherly, the confusion of visually similar objects with the target and the deformation of target in satellite videos can also lead to incorrect positioning. To address these problems, we proposed a target-aware bilateral CNN-Transformer network (TabCtNet). In TabCtNet, the bilateral CNN-Transformer architecture with the aggregation and interaction of local spatial information and global temporal context is designed to tackle the challenge of small target with weak features in complex and clutter background in satellite videos. To effectively reduce the impact of similar objects, the target-aware block-erasing strategy (TAS) is constructed to generate weakened heatmaps from the template target mask in a data-driven manner. Moreover, a pixel-wise refinement module with corner-based box estimation (PE) is designed to extract more fine-grained spatial information for more accurate box estimation and reduce the effect of target deformation. 
Experimental results show that TabCtNet quantitatively and qualitatively outperforms advanced single object tracking methods on two different satellite video datasets with four categories of targets from different scenarios. Furthermore, to investigate the generalizability of the TabCtNet framework, satellite videos sourced from different countries captured by various satellite platforms were used for evaluation, and the results reveal its robust performance across various scenarios.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Target-aware</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Object tracking</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Satellite videos</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Weaken heatmap</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Pixel-wise refinement</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Corner-based tracker</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Physical geography</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Environmental sciences</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xin Huang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Qingfeng Guan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">International Journal of Applied Earth Observations and Geoinformation</subfield><subfield code="d">Elsevier, 2022</subfield><subfield code="g">128(2024), Seite 103723-</subfield><subfield code="w">(DE-627)359784119</subfield><subfield 
code="w">(DE-600)2097960-5</subfield><subfield code="x">1872826X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:128</subfield><subfield code="g">year:2024</subfield><subfield code="g">pages:103723-</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.jag.2024.103723</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/a21c4d2d270846d48eea6d98b1491db4</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">http://www.sciencedirect.com/science/article/pii/S1569843224000773</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/1569-8432</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">128</subfield><subfield code="j">2024</subfield><subfield code="h">103723-</subfield></datafield></record></collection>
|
score |
7.3991175 |