Double-Channel Object Tracking With Position Deviation Suppression
Object tracking methods based on the multi-domain convolutional neural network (MDNet) commonly fail to track in the case of background clutter. A novel double-channel object tracking (DCOT) method is proposed to solve this problem. The discriminative correlation filter (DCF), which has strong discriminative power on low-level features, is employed for position deviation suppression of the candidate samples generated by MDNet. First, the pre-trained deep network is used to learn and classify the target and background in the video frames. If the tracked position of the DCF is judged to be correct, the MDNet target candidate samples with high position deviation are deleted; the position deviation is measured by the distance between the tracked positions of the DCF and MDNet. Finally, MDNet and the DCF are updated with a robust update strategy. Experiments are performed on OTB-100 and VOT-2016. The overlap precision and distance precision of DCOT on OTB-100 are 92.2% and 69.5%, respectively, higher than those of MDNet by 1.3% and 1.7%. In background clutter, the results of DCOT exceed those of SANet by 0.2% and 2.8%, respectively. DCOT is also superior to other state-of-the-art trackers on VOT-2016.
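The deviation-suppression step described in the abstract can be summarised in a few lines. The sketch below is illustrative only and is not taken from the paper: the function name `suppress_position_deviation`, the representation of candidates as 2-D centre coordinates, and the pixel threshold `max_deviation` are assumptions, and the paper's reliability test for the DCF result is abstracted into a boolean flag.

```python
import numpy as np


def suppress_position_deviation(mdnet_candidates, dcf_position, dcf_reliable, max_deviation=20.0):
    """Drop MDNet candidate samples that lie far from the DCF-tracked position.

    mdnet_candidates : (N, 2) array-like of candidate centre positions (x, y) from MDNet
    dcf_position     : (2,) centre position (x, y) tracked by the DCF
    dcf_reliable     : bool, outcome of the DCF reliability check (not modelled here)
    max_deviation    : distance threshold in pixels beyond which a candidate is removed
    """
    candidates = np.asarray(mdnet_candidates, dtype=float)
    if not dcf_reliable:
        # If the DCF result is not trusted, keep all MDNet candidates unchanged.
        return candidates
    # Position deviation = Euclidean distance between each candidate centre
    # and the DCF-tracked centre.
    deviation = np.linalg.norm(candidates - np.asarray(dcf_position, dtype=float), axis=1)
    return candidates[deviation <= max_deviation]


if __name__ == "__main__":
    # Toy example: the candidate at (150, 170) deviates strongly and is discarded.
    cands = [[100.0, 100.0], [104.0, 98.0], [150.0, 170.0]]
    print(suppress_position_deviation(cands, dcf_position=[102.0, 101.0], dcf_reliable=True))
```

In the full method, the retained candidates would then be scored by MDNet as usual, and both MDNet and the DCF would be updated with the robust update strategy mentioned in the abstract.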
Detailed description
Author(s): Jun Chu [author]; Xuji Tu [author]; Lu Leng [author]; Jun Miao [author]
Format: E-article
Language: English
Published: 2020
Keywords: Double-channel object tracking
Parent work: In: IEEE Access - IEEE, 2014, 8(2020), pages 856-866
Parent work: volume:8 ; year:2020 ; pages:856-866
Links:
DOI / URN: 10.1109/ACCESS.2019.2961778
Catalogue ID: DOAJ056274483
LEADER 01000caa a22002652 4500
001 DOAJ056274483
003 DE-627
005 20230502152339.0
007 cr uuu---uuuuu
008 230227s2020 xx |||||o 00| ||eng c
024 7 |a 10.1109/ACCESS.2019.2961778 |2 doi
035 |a (DE-627)DOAJ056274483
035 |a (DE-599)DOAJ507618d70fe6451d92125a796a6d8a6c
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
050 0 |a TK1-9971
100 0 |a Jun Chu |e verfasserin |4 aut
245 1 0 |a Double-Channel Object Tracking With Position Deviation Suppression
264 1 |c 2020
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a Object tracking methods based on the multi-domain convolutional neural network (MDNet) commonly fail to track in the case of background clutter. A novel double-channel object tracking (DCOT) method is proposed to solve this problem. The discriminative correlation filter (DCF), which has strong discriminative power on low-level features, is employed for position deviation suppression of the candidate samples generated by MDNet. First, the pre-trained deep network is used to learn and classify the target and background in the video frames. If the tracked position of the DCF is judged to be correct, the MDNet target candidate samples with high position deviation are deleted; the position deviation is measured by the distance between the tracked positions of the DCF and MDNet. Finally, MDNet and the DCF are updated with a robust update strategy. Experiments are performed on OTB-100 and VOT-2016. The overlap precision and distance precision of DCOT on OTB-100 are 92.2% and 69.5%, respectively, higher than those of MDNet by 1.3% and 1.7%. In background clutter, the results of DCOT exceed those of SANet by 0.2% and 2.8%, respectively. DCOT is also superior to other state-of-the-art trackers on VOT-2016.
650 4 |a Double-channel object tracking
650 4 |a position deviation suppression
650 4 |a DCF
650 4 |a MDNet
653 0 |a Electrical engineering. Electronics. Nuclear engineering
700 0 |a Xuji Tu |e verfasserin |4 aut
700 0 |a Lu Leng |e verfasserin |4 aut
700 0 |a Jun Miao |e verfasserin |4 aut
773 0 8 |i In |t IEEE Access |d IEEE, 2014 |g 8(2020), Seite 856-866 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 1 8 |g volume:8 |g year:2020 |g pages:856-866
856 4 0 |u https://doi.org/10.1109/ACCESS.2019.2961778 |z kostenfrei
856 4 0 |u https://doaj.org/article/507618d70fe6451d92125a796a6d8a6c |z kostenfrei
856 4 0 |u https://ieeexplore.ieee.org/document/8939361/ |z kostenfrei
856 4 2 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a SSG-OLC-PHA
912 |a GBV_ILN_11
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_213
912 |a GBV_ILN_230
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_2014
912 |a GBV_ILN_4012
912 |a GBV_ILN_4037
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4249
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4335
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4700
951 |a AR
952 |d 8 |j 2020 |h 856-866
author_variant: j c jc x t xt l l ll j m jm
matchkey_str: article:21693536:2020----::obehneojctaknwtpstodv
hierarchy_sort_str: 2020
callnumber-subject-code: TK
publishDate: 2020
language: English
source: In IEEE Access 8(2020), Seite 856-866 volume:8 year:2020 pages:856-866
sourceStr: In IEEE Access 8(2020), Seite 856-866 volume:8 year:2020 pages:856-866
format_phy_str_mv: Article
institution: findex.gbv.de
topic_facet: Double-channel object tracking position deviation suppression DCF MDNet Electrical engineering. Electronics. Nuclear engineering
isfreeaccess_bool: true
container_title: IEEE Access
authorswithroles_txt_mv: Jun Chu @@aut@@ Xuji Tu @@aut@@ Lu Leng @@aut@@ Jun Miao @@aut@@
publishDateDaySort_date: 2020-01-01T00:00:00Z
hierarchy_top_id: 728440385
id: DOAJ056274483
language_de: englisch
callnumber-first: T - Technology
author: Jun Chu
spellingShingle: Jun Chu misc TK1-9971 misc Double-channel object tracking misc position deviation suppression misc DCF misc MDNet misc Electrical engineering. Electronics. Nuclear engineering Double-Channel Object Tracking With Position Deviation Suppression
authorStr: Jun Chu
ppnlink_with_tag_str_mv: @@773@@(DE-627)728440385
format: electronic Article
delete_txt_mv: keep
author_role: aut aut aut aut
collection: DOAJ
remote_str: true
callnumber-label: TK1-9971
illustrated: Not Illustrated
issn: 21693536
topic_title: TK1-9971 Double-Channel Object Tracking With Position Deviation Suppression Double-channel object tracking position deviation suppression DCF MDNet
topic: misc TK1-9971 misc Double-channel object tracking misc position deviation suppression misc DCF misc MDNet misc Electrical engineering. Electronics. Nuclear engineering
format_facet: Elektronische Aufsätze Aufsätze Elektronische Ressource
format_main_str_mv: Text Zeitschrift/Artikel
carriertype_str_mv: cr
hierarchy_parent_title: IEEE Access
hierarchy_parent_id: 728440385
hierarchy_top_title: IEEE Access
isfreeaccess_txt: true
familylinks_str_mv: (DE-627)728440385 (DE-600)2687964-5
title: Double-Channel Object Tracking With Position Deviation Suppression
ctrlnum: (DE-627)DOAJ056274483 (DE-599)DOAJ507618d70fe6451d92125a796a6d8a6c
title_full: Double-Channel Object Tracking With Position Deviation Suppression
author_sort: Jun Chu
journal: IEEE Access
journalStr: IEEE Access
callnumber-first-code: T
lang_code: eng
isOA_bool: true
recordtype: marc
publishDateSort: 2020
contenttype_str_mv: txt
container_start_page: 856
author_browse: Jun Chu Xuji Tu Lu Leng Jun Miao
container_volume: 8
class: TK1-9971
format_se: Elektronische Aufsätze
author-letter: Jun Chu
doi_str_mv: 10.1109/ACCESS.2019.2961778
author2-role: verfasserin
title_sort: double-channel object tracking with position deviation suppression
callnumber: TK1-9971
title_auth: Double-Channel Object Tracking With Position Deviation Suppression
abstract: Object tracking methods based on the multi-domain convolutional neural network (MDNet) commonly fail to track in the case of background clutter. A novel double-channel object tracking (DCOT) method is proposed to solve this problem. The discriminative correlation filter (DCF), which has strong discriminative power on low-level features, is employed for position deviation suppression of the candidate samples generated by MDNet. First, the pre-trained deep network is used to learn and classify the target and background in the video frames. If the tracked position of the DCF is judged to be correct, the MDNet target candidate samples with high position deviation are deleted; the position deviation is measured by the distance between the tracked positions of the DCF and MDNet. Finally, MDNet and the DCF are updated with a robust update strategy. Experiments are performed on OTB-100 and VOT-2016. The overlap precision and distance precision of DCOT on OTB-100 are 92.2% and 69.5%, respectively, higher than those of MDNet by 1.3% and 1.7%. In background clutter, the results of DCOT exceed those of SANet by 0.2% and 2.8%, respectively. DCOT is also superior to other state-of-the-art trackers on VOT-2016.
collection_details: GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700
title_short: Double-Channel Object Tracking With Position Deviation Suppression
url: https://doi.org/10.1109/ACCESS.2019.2961778 https://doaj.org/article/507618d70fe6451d92125a796a6d8a6c https://ieeexplore.ieee.org/document/8939361/ https://doaj.org/toc/2169-3536
remote_bool: true
author2: Xuji Tu Lu Leng Jun Miao
author2Str: Xuji Tu Lu Leng Jun Miao
ppnlink: 728440385
callnumber-subject: TK - Electrical and Nuclear Engineering
mediatype_str_mv: c
isOA_txt: true
hochschulschrift_bool: false
doi_str: 10.1109/ACCESS.2019.2961778
callnumber-a: TK1-9971
up_date: 2024-07-03T19:53:59.683Z
_version_: 1803588929785430016
score: 7.399102