Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification
Fine-grained visual classification is challenging due to similarities within classes and discriminative features located in subtle regions. Conventional methods focus on extracting features from the most discriminative parts, which may underperform when these parts are occluded or invisible. In addition, the limited training data also leads to a serious overfitting problem. In this paper, we propose an Attention-based Cropping and Erasing Network (ACEN) with a coarse-to-fine refinement strategy to address these problems. By convolving the feature maps from the CNN, we obtain a set of attention maps which focus on discriminative object parts. Guided by the attention maps, we propose attention region cropping and erasing operations to augment the training data. Moreover, the attention region cropping enhances local discriminative feature learning, and the attention region erasing promotes multi-attention learning. During the inference phase, the proposed coarse-to-fine refinement strategy refines the model prediction. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on challenging benchmarks, including CUB-200-2011, FGVC-Aircraft and Stanford Cars.
Author(s): Chen, Jianpin [author]; Li, Heng [author]; Liang, Junlin [author]; Su, Xiaofan [author]; Zhai, Zhenzhen [author]; Chai, Xinyu [author]
Format: E-article
Language: English
Published: 2022
Subjects: Fine-grained visual classification
Contained in: Neurocomputing - Amsterdam : Elsevier, 1989, vol. 501, pages 359-369
DOI / URN: 10.1016/j.neucom.2022.06.041
Catalog ID: ELV008182124
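The abstract above describes attention-guided cropping and erasing as data augmentation: an attention map highlights a discriminative region, which is either cropped (to enhance local feature learning) or erased (to force the network to attend to other parts). As a rough illustration only (not the authors' implementation; the thresholding scheme and array layout are assumptions), the two operations can be sketched with NumPy:

```python
import numpy as np

def attention_crop_and_erase(image, attn, threshold=0.5):
    """Illustrative sketch of attention-guided augmentation.

    image: (H, W, C) array; attn: (H, W) attention map with non-negative values.
    Returns a crop around the high-attention region and a copy of the
    image with that region erased (zeroed out).
    """
    # Binarize the attention map relative to its peak response.
    mask = attn >= threshold * attn.max()
    ys, xs = np.where(mask)
    # Bounding box of the attended region -> "attention region cropping".
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    cropped = image[y0:y1, x0:x1]
    # Zero out the attended pixels -> "attention region erasing".
    erased = image.copy()
    erased[mask] = 0
    return cropped, erased
```

In a training pipeline along the lines described, the crop would be resized and fed back to the network as an extra sample emphasizing local detail, while the erased image encourages the attention maps to discover additional discriminative parts.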
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | ELV008182124 | ||
003 | DE-627 | ||
005 | 20230524130537.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230508s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.neucom.2022.06.041 |2 doi | |
035 | |a (DE-627)ELV008182124 | ||
035 | |a (ELSEVIER)S0925-2312(22)00760-3 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | 4 | |a 610 |q DE-600 |
084 | |a 54.72 |2 bkl | ||
100 | 1 | |a Chen, Jianpin |e verfasserin |4 aut | |
245 | 1 | 0 | |a Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification |
264 | 1 | |c 2022 | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Fine-grained visual classification is challenging due to similarities within classes and discriminative features located in subtle regions. Conventional methods focus on extracting features from the most discriminative parts, which may underperform when these parts are occluded or invisible. In addition, the limited training data also leads to a serious overfitting problem. In this paper, we propose an Attention-based Cropping and Erasing Network (ACEN) with a coarse-to-fine refinement strategy to address these problems. By convolving the feature maps from the CNN, we obtain a set of attention maps which focus on discriminative object parts. Guided by the attention maps, we propose attention region cropping and erasing operations to augment the training data. Moreover, the attention region cropping enhances local discriminative feature learning, and the attention region erasing promotes multi-attention learning. During the inference phase, the proposed coarse-to-fine refinement strategy refines the model prediction. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on challenging benchmarks, including CUB-200-2011, FGVC-Aircraft and Stanford Cars. | ||
650 | 4 | |a Fine-grained visual classification | |
650 | 4 | |a Attention-based data augmentation | |
650 | 4 | |a Coarse-to-fine refinement | |
700 | 1 | |a Li, Heng |e verfasserin |4 aut | |
700 | 1 | |a Liang, Junlin |e verfasserin |4 aut | |
700 | 1 | |a Su, Xiaofan |e verfasserin |4 aut | |
700 | 1 | |a Zhai, Zhenzhen |e verfasserin |4 aut | |
700 | 1 | |a Chai, Xinyu |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Neurocomputing |d Amsterdam : Elsevier, 1989 |g 501, Seite 359-369 |h Online-Ressource |w (DE-627)271176008 |w (DE-600)1479006-3 |w (DE-576)078412358 |x 1872-8286 |7 nnns |
773 | 1 | 8 | |g volume:501 |g pages:359-369 |
912 | |a GBV_USEFLAG_U | ||
912 | |a SYSFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SSG-OLC-PHA | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
936 | b | k | |a 54.72 |j Künstliche Intelligenz |
951 | |a AR | ||
952 | |d 501 |h 359-369 |
code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">501</subfield><subfield code="h">359-369</subfield></datafield></record></collection>
|
author |
Chen, Jianpin |
spellingShingle |
Chen, Jianpin ddc 610 bkl 54.72 misc Fine-grained visual classification misc Attention-based data augmentation misc Coarse-to-fine refinement Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification |
authorStr |
Chen, Jianpin |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)271176008 |
format |
electronic Article |
dewey-ones |
610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1872-8286 |
topic_title |
610 DE-600 54.72 bkl Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification Fine-grained visual classification Attention-based data augmentation Coarse-to-fine refinement |
topic |
ddc 610 bkl 54.72 misc Fine-grained visual classification misc Attention-based data augmentation misc Coarse-to-fine refinement |
topic_unstemmed |
ddc 610 bkl 54.72 misc Fine-grained visual classification misc Attention-based data augmentation misc Coarse-to-fine refinement |
topic_browse |
ddc 610 bkl 54.72 misc Fine-grained visual classification misc Attention-based data augmentation misc Coarse-to-fine refinement |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Neurocomputing |
hierarchy_parent_id |
271176008 |
dewey-tens |
610 - Medicine & health |
hierarchy_top_title |
Neurocomputing |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)271176008 (DE-600)1479006-3 (DE-576)078412358 |
title |
Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification |
ctrlnum |
(DE-627)ELV008182124 (ELSEVIER)S0925-2312(22)00760-3 |
title_full |
Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification |
author_sort |
Chen, Jianpin |
journal |
Neurocomputing |
journalStr |
Neurocomputing |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
359 |
author_browse |
Chen, Jianpin Li, Heng Liang, Junlin Su, Xiaofan Zhai, Zhenzhen Chai, Xinyu |
container_volume |
501 |
class |
610 DE-600 54.72 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Chen, Jianpin |
doi_str_mv |
10.1016/j.neucom.2022.06.041 |
dewey-full |
610 |
author2-role |
verfasserin |
title_sort |
attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification |
title_auth |
Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification |
abstract |
Fine-grained visual classification is challenging due to similarities within classes and discriminative features located in subtle regions. Conventional methods focus on extracting features from the most discriminative parts, which may underperform when these parts are occluded or invisible. In addition, the limited training data leads to a serious overfitting problem. In this paper, we propose an Attention-based Cropping and Erasing Network (ACEN) with a coarse-to-fine refinement strategy to address these problems. By convolving the feature maps from a CNN, we obtain a set of attention maps that focus on discriminative object parts. Guided by the attention maps, we propose attention region cropping and erasing operations to augment the training data. Moreover, attention region cropping enhances local discriminative feature learning, and attention region erasing promotes multi-attention learning. During the inference phase, the coarse-to-fine refinement strategy refines the model prediction. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on challenging benchmarks, including CUB-200-2011, FGVC-Aircraft, and Stanford Cars.
abstractGer |
Fine-grained visual classification is challenging due to similarities within classes and discriminative features located in subtle regions. Conventional methods focus on extracting features from the most discriminative parts, which may underperform when these parts are occluded or invisible. In addition, the limited training data leads to a serious overfitting problem. In this paper, we propose an Attention-based Cropping and Erasing Network (ACEN) with a coarse-to-fine refinement strategy to address these problems. By convolving the feature maps from a CNN, we obtain a set of attention maps that focus on discriminative object parts. Guided by the attention maps, we propose attention region cropping and erasing operations to augment the training data. Moreover, attention region cropping enhances local discriminative feature learning, and attention region erasing promotes multi-attention learning. During the inference phase, the coarse-to-fine refinement strategy refines the model prediction. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on challenging benchmarks, including CUB-200-2011, FGVC-Aircraft, and Stanford Cars.
abstract_unstemmed |
Fine-grained visual classification is challenging due to similarities within classes and discriminative features located in subtle regions. Conventional methods focus on extracting features from the most discriminative parts, which may underperform when these parts are occluded or invisible. In addition, the limited training data leads to a serious overfitting problem. In this paper, we propose an Attention-based Cropping and Erasing Network (ACEN) with a coarse-to-fine refinement strategy to address these problems. By convolving the feature maps from a CNN, we obtain a set of attention maps that focus on discriminative object parts. Guided by the attention maps, we propose attention region cropping and erasing operations to augment the training data. Moreover, attention region cropping enhances local discriminative feature learning, and attention region erasing promotes multi-attention learning. During the inference phase, the coarse-to-fine refinement strategy refines the model prediction. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on challenging benchmarks, including CUB-200-2011, FGVC-Aircraft, and Stanford Cars.
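The attention-guided cropping and erasing operations described in the abstract can be sketched as below. This is a minimal illustrative version: the function name, array layout, and threshold values are assumptions for exposition, not the paper's exact implementation, and the attention map is taken as a given normalized array rather than produced by the network.

```python
import numpy as np

def attention_crop_and_erase(image, attention, crop_thresh=0.5, erase_thresh=0.5):
    """Sketch of attention-guided data augmentation (illustrative only).

    image:     (H, W, C) array, one input image
    attention: (H, W) attention map normalized to [0, 1]
    """
    # Cropping: take the tight bounding box around high-attention pixels,
    # which focuses training on a discriminative local region.
    mask = attention >= crop_thresh
    ys, xs = np.where(mask)
    cropped = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Erasing: zero out the high-attention region so the network is forced
    # to discover other discriminative parts (multi-attention learning).
    erased = image.copy()
    erased[attention >= erase_thresh] = 0
    return cropped, erased
```

In a training loop, both augmented views would be fed back to the network alongside the original image; in the full method the crop is additionally resized to the input resolution before the second pass.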
collection_details |
GBV_USEFLAG_U SYSFLAG_U GBV_ELV SSG-OLC-PHA GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_224 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2008 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4313 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4393 |
title_short |
Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification |
remote_bool |
true |
author2 |
Li, Heng Liang, Junlin Su, Xiaofan Zhai, Zhenzhen Chai, Xinyu |
author2Str |
Li, Heng Liang, Junlin Su, Xiaofan Zhai, Zhenzhen Chai, Xinyu |
ppnlink |
271176008 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.neucom.2022.06.041 |
up_date |
2024-07-06T18:49:30.738Z |
_version_ |
1803856663795466240 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV008182124</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230524130537.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230508s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.neucom.2022.06.041</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV008182124</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0925-2312(22)00760-3</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">DE-600</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.72</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Chen, Jianpin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield 
code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Fine-grained visual classification is challenging due to similarities within classes and discriminative features located in subtle regions. Conventional methods focus on extracting features from the most discriminative parts, which may underperform when these parts are occluded or invisible. And the limited training data also leads to serious overfitting problem. In this paper, we propose an Attention-based Cropping and Erasing Network (ACEN) with a coarse-to-fine refinement strategy to address these problems. By convolving the feature maps from CNN, we obtain a set of attention maps which focus on discriminative object parts. Guided by the attention maps, we propose attention region cropping and erasing operations to augment training data. Moreover, the attention region cropping enhances local discriminative feature learning, and the attention region erasing promotes multi-attention learning. During inference phase, the coarse-to-fine refinement strategy is proposed to refine the model prediction. 
Extensive experiments demonstrate that our approach achieves state-of-the-art performance on challenging benchmarks, including CUB-200-2011, FGVC-Aircraft and Stanford Cars.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Fine-grained visual classification</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Attention-based data augmentation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Coarse-to-fine refinement</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Heng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liang, Junlin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Su, Xiaofan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhai, Zhenzhen</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chai, Xinyu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Neurocomputing</subfield><subfield code="d">Amsterdam : Elsevier, 1989</subfield><subfield code="g">501, Seite 359-369</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)271176008</subfield><subfield code="w">(DE-600)1479006-3</subfield><subfield code="w">(DE-576)078412358</subfield><subfield code="x">1872-8286</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:501</subfield><subfield 
code="g">pages:359-369</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">501</subfield><subfield code="h">359-369</subfield></datafield></record></collection>
|