Regularizing deep networks with label geometry for accurate object localization on small training datasets
Localization is a critical subtask in object detection that is closely related to the spatial information of objects. Most current detectors simply rely on the fitting ability of deep neural networks to regress towards numerical targets such as the coordinates of object boxes. Training deep networks to fit sufficiently well requires a large number of annotations, which are expensive to obtain. In this work, we fully exploit limited annotations by extracting label geometry to improve localization performance on small datasets. We generate the distance transform of bounding box edges from the localization labels and use it to supervise intermediate outputs of the network pixel by pixel, reconstructing object geometry for localization. The distance transform is sensitive to box edges and provides geometric gradients that flow towards boundaries. We learn these gradients to enhance geometry-aware features through training coupled with regression, and use them to refine regressed boxes in an evolutionary manner at inference. Extensive experiments demonstrate the effectiveness of our method. Our method can be applied in applications that require human-machine interaction, such as driver-assistance systems in autonomous driving, by providing accurate detections that help humans make better decisions.
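The abstract's core idea — turning a box annotation into a dense, pixel-wise supervision map via the distance transform of the box edges — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `box_edge_distance_map`, the unnormalized pixel-unit distances, and the closed-form (rather than image-morphology) computation are all choices made here for brevity.

```python
import numpy as np

def box_edge_distance_map(h, w, box):
    """For an h x w image and a box (x1, y1, x2, y2), return the
    Euclidean distance from every pixel to the nearest point on the
    box perimeter. Pixels on an edge get 0; values grow away from it."""
    x1, y1, x2, y2 = box
    ys, xs = np.mgrid[0:h, 0:w]              # row/column coordinate grids
    # Horizontal / vertical overshoot beyond the box (0 if inside that axis).
    dx = np.maximum.reduce([x1 - xs, np.zeros_like(xs), xs - x2])
    dy = np.maximum.reduce([y1 - ys, np.zeros_like(ys), ys - y2])
    # Outside the box, distance to the perimeter is the hypotenuse of the
    # overshoots; inside, it is the distance to the nearest of the 4 sides.
    outside = np.hypot(dx, dy)
    inside = np.minimum.reduce([xs - x1, x2 - xs, ys - y1, y2 - ys])
    return np.where((dx > 0) | (dy > 0), outside, inside)

dist = box_edge_distance_map(8, 8, (2, 2, 5, 5))
```

A map like `dist` (possibly normalized or clipped — the paper's exact target design is not reproduced here) could then serve as the pixel-by-pixel regression target for an intermediate network output.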
Detailed description
Author(s): Wang, Xiaolian [author]; Hu, Xiyuan [author]; Chen, Chen [author]; Peng, Silong [author]
Format: E-article
Language: English
Published: 2022
Subject headings: Object detection; Object localization; Label geometry; Box evolution; Small dataset; Human-machine interaction
Contained in: Pattern recognition letters - Amsterdam [u.a.] : Elsevier, 1982, 154, pages 53-59 (volume:154; pages:53-59)
DOI: 10.1016/j.patrec.2022.01.004
Catalog ID: ELV007419163
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | ELV007419163 | ||
003 | DE-627 | ||
005 | 20230524145940.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230507s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.patrec.2022.01.004 |2 doi | |
035 | |a (DE-627)ELV007419163 | ||
035 | |a (ELSEVIER)S0167-8655(22)00004-6 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | 4 | |a 004 |q DE-600 |
084 | |a 54.74 |2 bkl | ||
100 | 1 | |a Wang, Xiaolian |e verfasserin |4 aut | |
245 | 1 | 0 | |a Regularizing deep networks with label geometry for accurate object localization on small training datasets |
264 | 1 | |c 2022 | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Localization is a critical subtask in object detection that is closely related to the spatial information of objects. Most current detectors simply rely on the fitting ability of deep neural networks to regress towards numerical targets such as the coordinates of object boxes. Training deep networks to fit sufficiently well requires a large number of annotations, which are expensive to obtain. In this work, we fully exploit limited annotations by extracting label geometry to improve localization performance on small datasets. We generate the distance transform of bounding box edges from the localization labels and use it to supervise intermediate outputs of the network pixel by pixel, reconstructing object geometry for localization. The distance transform is sensitive to box edges and provides geometric gradients that flow towards boundaries. We learn these gradients to enhance geometry-aware features through training coupled with regression, and use them to refine regressed boxes in an evolutionary manner at inference. Extensive experiments demonstrate the effectiveness of our method. Our method can be applied in applications that require human-machine interaction, such as driver-assistance systems in autonomous driving, by providing accurate detections that help humans make better decisions. | ||
650 | 4 | |a Object detection | |
650 | 4 | |a Object localization | |
650 | 4 | |a Label geometry | |
650 | 4 | |a Box evolution | |
650 | 4 | |a Small dataset | |
650 | 4 | |a Human-machine interaction | |
700 | 1 | |a Hu, Xiyuan |e verfasserin |4 aut | |
700 | 1 | |a Chen, Chen |e verfasserin |4 aut | |
700 | 1 | |a Peng, Silong |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Pattern recognition letters |d Amsterdam [u.a.] : Elsevier, 1982 |g 154, Seite 53-59 |h Online-Ressource |w (DE-627)265784123 |w (DE-600)1466342-9 |w (DE-576)074891006 |x 0167-8655 |7 nnns |
773 | 1 | 8 | |g volume:154 |g pages:53-59 |
912 | |a GBV_USEFLAG_U | ||
912 | |a SYSFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
936 | b | k | |a 54.74 |j Maschinelles Sehen |
951 | |a AR | ||
952 | |d 154 |h 53-59 |
|
author |
Wang, Xiaolian |
spellingShingle |
Wang, Xiaolian ddc 004 bkl 54.74 misc Object detection misc Object localization misc Label geometry misc Box evolution misc Small dataset misc Human-machine interaction Regularizing deep networks with label geometry for accurate object localization on small training datasets |
authorStr |
Wang, Xiaolian |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)265784123 |
format |
electronic Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
0167-8655 |
topic_title |
004 DE-600 54.74 bkl Regularizing deep networks with label geometry for accurate object localization on small training datasets Object detection Object localization Label geometry Box evolution Small dataset Human-machine interaction |
topic |
ddc 004 bkl 54.74 misc Object detection misc Object localization misc Label geometry misc Box evolution misc Small dataset misc Human-machine interaction |
topic_unstemmed |
ddc 004 bkl 54.74 misc Object detection misc Object localization misc Label geometry misc Box evolution misc Small dataset misc Human-machine interaction |
topic_browse |
ddc 004 bkl 54.74 misc Object detection misc Object localization misc Label geometry misc Box evolution misc Small dataset misc Human-machine interaction |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Pattern recognition letters |
hierarchy_parent_id |
265784123 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Pattern recognition letters |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)265784123 (DE-600)1466342-9 (DE-576)074891006 |
title |
Regularizing deep networks with label geometry for accurate object localization on small training datasets |
ctrlnum |
(DE-627)ELV007419163 (ELSEVIER)S0167-8655(22)00004-6 |
title_full |
Regularizing deep networks with label geometry for accurate object localization on small training datasets |
author_sort |
Wang, Xiaolian |
journal |
Pattern recognition letters |
journalStr |
Pattern recognition letters |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
53 |
author_browse |
Wang, Xiaolian Hu, Xiyuan Chen, Chen Peng, Silong |
container_volume |
154 |
class |
004 DE-600 54.74 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Wang, Xiaolian |
doi_str_mv |
10.1016/j.patrec.2022.01.004 |
dewey-full |
004 |
author2-role |
verfasserin |
title_sort |
regularizing deep networks with label geometry for accurate object localization on small training datasets |
title_auth |
Regularizing deep networks with label geometry for accurate object localization on small training datasets |
abstract |
Localization is a critical subtask in object detection that is closely related to the spatial information of objects. Most current detectors simply rely on the fitting ability of deep neural networks to regress towards numerical targets such as the coordinates of object boxes. Training deep networks for sufficient fitting ability requires a large number of annotations, which are expensive to obtain. In this work, we fully exploit limited annotations by extracting label geometry to improve localization performance on small datasets. We generate the distance transform of bounding box edges according to localization labels, with which we supervise intermediate outputs of the networks pixel by pixel to reconstruct object geometry for localization. The distance transform is sensitive to box edges and provides geometric gradients flowing into boundaries. We learn such gradients to enhance geometry-aware features through coupled training with regression, and use them to refine regressed boxes in an evolutionary manner at inference. Extensive experiments demonstrate the effectiveness of our method. Our method can be applied in applications that require human-machine interaction, such as driver-assistance systems in autonomous driving, by providing accurate detections to assist humans in making better decisions.
abstractGer |
Localization is a critical subtask in object detection that is closely related to the spatial information of objects. Most current detectors simply rely on the fitting ability of deep neural networks to regress towards numerical targets such as the coordinates of object boxes. Training deep networks for sufficient fitting ability requires a large number of annotations, which are expensive to obtain. In this work, we fully exploit limited annotations by extracting label geometry to improve localization performance on small datasets. We generate the distance transform of bounding box edges according to localization labels, with which we supervise intermediate outputs of the networks pixel by pixel to reconstruct object geometry for localization. The distance transform is sensitive to box edges and provides geometric gradients flowing into boundaries. We learn such gradients to enhance geometry-aware features through coupled training with regression, and use them to refine regressed boxes in an evolutionary manner at inference. Extensive experiments demonstrate the effectiveness of our method. Our method can be applied in applications that require human-machine interaction, such as driver-assistance systems in autonomous driving, by providing accurate detections to assist humans in making better decisions.
abstract_unstemmed |
Localization is a critical subtask in object detection that is closely related to the spatial information of objects. Most current detectors simply rely on the fitting ability of deep neural networks to regress towards numerical targets such as the coordinates of object boxes. Training deep networks for sufficient fitting ability requires a large number of annotations, which are expensive to obtain. In this work, we fully exploit limited annotations by extracting label geometry to improve localization performance on small datasets. We generate the distance transform of bounding box edges according to localization labels, with which we supervise intermediate outputs of the networks pixel by pixel to reconstruct object geometry for localization. The distance transform is sensitive to box edges and provides geometric gradients flowing into boundaries. We learn such gradients to enhance geometry-aware features through coupled training with regression, and use them to refine regressed boxes in an evolutionary manner at inference. Extensive experiments demonstrate the effectiveness of our method. Our method can be applied in applications that require human-machine interaction, such as driver-assistance systems in autonomous driving, by providing accurate detections to assist humans in making better decisions.
collection_details |
GBV_USEFLAG_U SYSFLAG_U GBV_ELV GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_224 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2336 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4313 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4393 |
title_short |
Regularizing deep networks with label geometry for accurate object localization on small training datasets |
remote_bool |
true |
author2 |
Hu, Xiyuan Chen, Chen Peng, Silong |
author2Str |
Hu, Xiyuan Chen, Chen Peng, Silong |
ppnlink |
265784123 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.patrec.2022.01.004 |
up_date |
2024-07-07T00:41:02.846Z |
_version_ |
1803878780474753024 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV007419163</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230524145940.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230507s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.patrec.2022.01.004</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV007419163</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0167-8655(22)00004-6</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">DE-600</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.74</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Wang, Xiaolian</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Regularizing deep networks with label geometry for accurate object localization on small training datasets</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield 
code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Localization is a critical subtask in object detection, which is closely related to spatial information of objects. Most current detectors simply rely on the fitting ability of deep neural networks to regress towards numerical targets such as coordinates of object boxes. Training deep networks for sufficient fitting ability requires a large number of annotations that are expensive to obtain. In this work, we fully exploit limited annotations by extracting label geometry to improve localization performance on small datasets. We generate distance transform of bounding box edges according to localization labels, with which we supervise intermediate outputs of networks pixel by pixel to reconstruct object geometry for localization. Distance transform is sensitive to box edges and provides geometric gradients flowing into boundaries. We learn such gradients to enhance geometric-aware features through a coupled training with regression, and use it to refine regressed boxes in an evolutionary manner in inference. Extensive experiments are implemented to demonstrate the effectiveness of our method. 
Our method can be applied in applications that require human-machine interaction, such as driver-assistance systems in autonomous driving, by providing accurate detections to assist humans in making better decisions.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Object detection</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Object localization</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Label geometry</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Box evolution</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Small dataset</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Human-machine interaction</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hu, Xiyuan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chen, Chen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Peng, Silong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Pattern recognition letters</subfield><subfield code="d">Amsterdam [u.a.] 
: Elsevier, 1982</subfield><subfield code="g">154, Seite 53-59</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)265784123</subfield><subfield code="w">(DE-600)1466342-9</subfield><subfield code="w">(DE-576)074891006</subfield><subfield code="x">0167-8655</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:154</subfield><subfield code="g">pages:53-59</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.74</subfield><subfield code="j">Maschinelles Sehen</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">154</subfield><subfield code="h">53-59</subfield></datafield></record></collection>
|