Scalable frame resolution for efficient continuous sign language recognition
In this paper, we explore the spatial redundancy in continuous sign language recognition (CSLR), aiming to improve its efficiency. Despite recent advances in accuracy in CSLR, state-of-the-art CSLR methods typically require large amounts of computations and memory occupation, which are not friendly...
Detailed description

Author: Hu, Lianyu [author]; Gao, Liqing [author]; Liu, Zekang [author]; Feng, Wei [author]
Format: E-Article
Language: English
Published: 2023
Keywords: Continuous sign language recognition; Efficient inference; Scalable frame resolution; Adaptive inference
Parent work: Contained in: Pattern recognition - Amsterdam : Elsevier, 1968, 145
Parent work: volume:145
DOI / URN: 10.1016/j.patcog.2023.109903
Catalog ID: ELV06498298X
LEADER 01000caa a22002652 4500
001    ELV06498298X
003    DE-627
005    20231205153554.0
007    cr uuu---uuuuu
008    231006s2023 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.patcog.2023.109903 |2 doi
035    |a (DE-627)ELV06498298X
035    |a (ELSEVIER)S0031-3203(23)00601-5
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 04 |a 000 |a 150 |q VZ
084    |a 54.74 |2 bkl
100 1  |a Hu, Lianyu |e verfasserin |0 (orcid)0000-0001-9974-4920 |4 aut
245 10 |a Scalable frame resolution for efficient continuous sign language recognition
264  1 |c 2023
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a In this paper, we explore the spatial redundancy in continuous sign language recognition (CSLR), aiming to improve its efficiency. Despite recent advances in accuracy in CSLR, state-of-the-art CSLR methods typically require large amounts of computations and memory occupation, which are not friendly towards fast inference under limited computation/memory budgets. Based on a simple observation that not all frames are equally important for CSLR, we propose AdaSize to handle this problem by modeling the frame resolution decision as an end-to-end learnable task to save unnecessary computations. Specifically, a lightweight 2D convolutional neural network (CNN) is first used to quickly browse input frames under a low resolution (e.g., 112 × 112). These extracted coarse and cheap features are sent into a recurrent policy network to dynamically determine the desired resolution for each frame. Once the optimal resolution for each frame is decided, frames with different resolutions are fed into the following backbones to extract representative features. Finally, these features pass through a sequence of temporal modules and a classifier to predict sentences. Extensive experiments on four large-scale datasets, including PHOENIX14, PHOENIX14-T, CSL-Daily and CSL, demonstrate the effectiveness of AdaSize. AdaSize could consistently achieve comparable accuracy with state-of-the-art CSLR methods, with only 0.38 × computations, 0.41 × memory usage and 1.25 × throughput. Comparisons with commonly-used lightweight backbones and other efficient methods verify the superiority of AdaSize under similar computational/memory budgets. We finally plot the frame resolution decisions for AdaSize, hoping to provide insightful analysis of the inherent spatial redundancy in videos.
650  4 |a Continuous sign language recognition
650  4 |a Efficient inference
650  4 |a Scalable frame resolution
650  4 |a Adaptive inference
700 1  |a Gao, Liqing |e verfasserin |0 (orcid)0000-0003-4518-2154 |4 aut
700 1  |a Liu, Zekang |e verfasserin |4 aut
700 1  |a Feng, Wei |e verfasserin |4 aut
773 08 |i Enthalten in |t Pattern recognition |d Amsterdam : Elsevier, 1968 |g 145 |h Online-Ressource |w (DE-627)265784131 |w (DE-600)1466343-0 |w (DE-576)101177364 |7 nnns
773 18 |g volume:145
912    |a GBV_USEFLAG_U
912    |a GBV_ELV
912    |a SYSFLAG_U
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_32
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_90
912    |a GBV_ILN_95
912    |a GBV_ILN_100
912    |a GBV_ILN_101
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_150
912    |a GBV_ILN_151
912    |a GBV_ILN_187
912    |a GBV_ILN_213
912    |a GBV_ILN_224
912    |a GBV_ILN_230
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_702
912    |a GBV_ILN_2001
912    |a GBV_ILN_2003
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2007
912    |a GBV_ILN_2008
912    |a GBV_ILN_2009
912    |a GBV_ILN_2010
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2015
912    |a GBV_ILN_2020
912    |a GBV_ILN_2021
912    |a GBV_ILN_2025
912    |a GBV_ILN_2026
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2055
912    |a GBV_ILN_2056
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2088
912    |a GBV_ILN_2106
912    |a GBV_ILN_2110
912    |a GBV_ILN_2111
912    |a GBV_ILN_2112
912    |a GBV_ILN_2122
912    |a GBV_ILN_2129
912    |a GBV_ILN_2143
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2190
912    |a GBV_ILN_2232
912    |a GBV_ILN_2336
912    |a GBV_ILN_2470
912    |a GBV_ILN_2507
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4242
912    |a GBV_ILN_4249
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4338
912    |a GBV_ILN_4393
912    |a GBV_ILN_4700
936 bk |a 54.74 |j Maschinelles Sehen |q VZ
951    |a AR
952    |d 145
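The 520 abstract field above describes a concrete pipeline: a lightweight CNN browses all frames at low resolution, a recurrent policy network picks a resolution per frame, and backbones extract features that feed temporal modules and a classifier. As a reading aid, here is a minimal PyTorch-style sketch of that adaptive-resolution idea; the module names, the candidate resolution set, and the batch-size-1 simplification are assumptions of this sketch, not the authors' released code.

```python
# Minimal sketch of per-frame adaptive resolution (AdaSize-style). Assumed
# components: cheap_cnn and backbone are CNNs ending in global pooling, so
# their output dimension is independent of the input resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

RESOLUTIONS = [96, 160, 224]  # assumed candidate frame sizes


class PolicyRNN(nn.Module):
    """Recurrent policy that picks a resolution index for every frame."""

    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(RESOLUTIONS))

    def forward(self, coarse_feats):           # (B, T, feat_dim)
        h, _ = self.rnn(coarse_feats)
        logits = self.head(h)                  # (B, T, len(RESOLUTIONS))
        # Hard decisions at inference; training the decision end to end
        # needs a differentiable relaxation such as Gumbel-softmax.
        return logits.argmax(dim=-1)           # (B, T)


def browse(frames, cheap_cnn):
    """Quickly browse all frames at a low resolution (e.g., 112 x 112)."""
    B, T, C, H, W = frames.shape
    small = F.interpolate(frames.flatten(0, 1), size=(112, 112))
    return cheap_cnn(small).view(B, T, -1)     # coarse, cheap features


def forward_adaptive(frames, cheap_cnn, policy, backbone):
    """frames: (1, T, C, H, W); batch size 1 assumed for clarity."""
    decisions = policy(browse(frames, cheap_cnn))      # (1, T)
    feats = []
    for t in range(frames.shape[1]):
        res = RESOLUTIONS[int(decisions[0, t])]
        frame = F.interpolate(frames[:, t], size=(res, res))
        feats.append(backbone(frame))                  # per-frame features
    return torch.stack(feats, dim=1)  # -> temporal modules and classifier
```

The one structural commitment this sketch mirrors from the abstract is the two-pass layout: a cheap full-sequence browse first, then full-cost feature extraction only at the resolution each frame is judged to deserve.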
author_variant |
l h lh l g lg z l zl w f wf |
matchkey_str |
hulianyugaoliqingliuzekangfengwei:2023----:clbermrsltofrfiincniuusgl |
hierarchy_sort_str |
2023 |
bklnumber |
54.74 |
publishDate |
2023 |
allfields |
10.1016/j.patcog.2023.109903 doi (DE-627)ELV06498298X (ELSEVIER)S0031-3203(23)00601-5 DE-627 ger DE-627 rda eng 000 150 VZ 54.74 bkl Hu, Lianyu verfasserin (orcid)0000-0001-9974-4920 aut Scalable frame resolution for efficient continuous sign language recognition 2023 nicht spezifiziert zzz rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier In this paper, we explore the spatial redundancy in continuous sign language recognition (CSLR), aiming to improve its efficiency. Despite recent advances in accuracy in CSLR, state-of-the-art CSLR methods typically require large amounts of computations and memory occupation, which are not friendly towards fast inference under limited computation/memory budgets. Based on a simple observation that not all frames are equally important for CSLR, we propose AdaSize to handle this problem by modeling the frame resolution decision as an end-to-end learnable task to save unnecessary computations. Specifically, a lightweight 2D convolutional neural network (CNN) is first used to quickly browse input frames under a low resolution (e.g., 112 × 112). These extracted coarse and cheap features are sent into a recurrent policy network to dynamically determine the desired resolution for each frame. Once the optimal resolution for each frame is decided, frames with different resolutions are fed into the following backbones to extract representative features. Finally, these features pass through a sequence of temporal modules and a classifier to predict sentences. Extensive experiments on four large-scale datasets, including PHOENIX14, PHOENIX14-T, CSL-Daily and CSL, demonstrate the effectiveness of AdaSize. AdaSize could consistently achieve comparable accuracy with state-of-the-art CSLR methods, with only 0.38 × computations, 0.41 × memory usage and 1.25 × throughput. Comparisons with commonly-used lightweight backbones and other efficient methods verify the superiority of AdaSize under similar computational/memory budgets. We finally plot the frame resolution decisions for AdaSize, hoping to provide insightful analysis of the inherent spatial redundancy in videos. 
Continuous sign language recognition Efficient inference Scalable frame resolution Adaptive inference Gao, Liqing verfasserin (orcid)0000-0003-4518-2154 aut Liu, Zekang verfasserin aut Feng, Wei verfasserin aut Enthalten in Pattern recognition Amsterdam : Elsevier, 1968 145 Online-Ressource (DE-627)265784131 (DE-600)1466343-0 (DE-576)101177364 nnns volume:145 GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 54.74 Maschinelles Sehen VZ AR 145 |
language |
English |
source |
Enthalten in Pattern recognition 145 volume:145 |
sourceStr |
Enthalten in Pattern recognition 145 volume:145 |
format_phy_str_mv |
Article |
bklname |
Maschinelles Sehen |
institution |
findex.gbv.de |
topic_facet |
Continuous sign language recognition Efficient inference Scalable frame resolution Adaptive inference |
dewey-raw |
000 |
isfreeaccess_bool |
false |
container_title |
Pattern recognition |
authorswithroles_txt_mv |
Hu, Lianyu @@aut@@ Gao, Liqing @@aut@@ Liu, Zekang @@aut@@ Feng, Wei @@aut@@ |
publishDateDaySort_date |
2023-01-01T00:00:00Z |
hierarchy_top_id |
265784131 |
dewey-sort |
0 |
id |
ELV06498298X |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV06498298X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20231205153554.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">231006s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.patcog.2023.109903</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV06498298X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0031-3203(23)00601-5</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">000</subfield><subfield code="a">150</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.74</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hu, Lianyu</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0001-9974-4920</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Scalable frame resolution for efficient continuous sign language recognition</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">In this paper, we explore the spatial redundancy in continuous sign language recognition (CSLR), aiming to improve its efficiency. Despite recent advances in accuracy in CSLR, state-of-the-art CSLR methods typically require large amounts of computations and memory occupation, which are not friendly towards fast inference under limited computation/memory budgets. Based on a simple observation that not all frames are equally important for CSLR, we propose AdaSize to handle this problem by modeling the frame resolution decision as an end-to-end learnable task to save unnecessary computations. Specifically, a lightweight 2D convolutional neural network (CNN) is first used to quickly browse input frames under a low resolution (e.g., 112 × 112). These extracted coarse and cheap features are sent into a recurrent policy network to dynamically determine the desired resolution for each frame. Once the optimal resolution for each frame is decided, frames with different resolutions are fed into the following backbones to extract representative features. Finally, these features pass through a sequence of temporal modules and a classifier to predict sentences. 
Extensive experiments on four large-scale datasets, including PHOENIX14, PHOENIX14-T, CSL-Daily and CSL, demonstrate the effectiveness of AdaSize. AdaSize could consistently achieve comparable accuracy with state-of-the-art CSLR methods, with only 0.38 × computations, 0.41 × memory usage and 1.25 × throughput. Comparisons with commonly-used lightweight backbones and other efficient methods verify the superiority of AdaSize under similar computational/memory budgets. We finally plot the frame resolution decisions for AdaSize, hoping to provide insightful analysis of the inherent spatial redundancy in videos.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Continuous sign language recognition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Efficient inference</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Scalable frame resolution</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Adaptive inference</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Gao, Liqing</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0003-4518-2154</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liu, Zekang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Feng, Wei</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Pattern recognition</subfield><subfield code="d">Amsterdam : Elsevier, 1968</subfield><subfield code="g">145</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)265784131</subfield><subfield code="w">(DE-600)1466343-0</subfield><subfield code="w">(DE-576)101177364</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:145</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.74</subfield><subfield code="j">Maschinelles Sehen</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">145</subfield></datafield></record></collection>
author |
Hu, Lianyu |
spellingShingle |
Hu, Lianyu ddc 000 bkl 54.74 misc Continuous sign language recognition misc Efficient inference misc Scalable frame resolution misc Adaptive inference Scalable frame resolution for efficient continuous sign language recognition |
authorStr |
Hu, Lianyu |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)265784131 |
format |
electronic Article |
dewey-ones |
000 - Computer science, information & general works 150 - Psychology |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
000 150 VZ 54.74 bkl Scalable frame resolution for efficient continuous sign language recognition Continuous sign language recognition Efficient inference Scalable frame resolution Adaptive inference |
topic |
ddc 000 bkl 54.74 misc Continuous sign language recognition misc Efficient inference misc Scalable frame resolution misc Adaptive inference |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Pattern recognition |
hierarchy_parent_id |
265784131 |
dewey-tens |
000 - Computer science, knowledge & systems 150 - Psychology |
hierarchy_top_title |
Pattern recognition |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)265784131 (DE-600)1466343-0 (DE-576)101177364 |
title |
Scalable frame resolution for efficient continuous sign language recognition |
ctrlnum |
(DE-627)ELV06498298X (ELSEVIER)S0031-3203(23)00601-5 |
title_full |
Scalable frame resolution for efficient continuous sign language recognition |
author_sort |
Hu, Lianyu |
journal |
Pattern recognition |
journalStr |
Pattern recognition |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works 100 - Philosophy & psychology |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
author_browse |
Hu, Lianyu Gao, Liqing Liu, Zekang Feng, Wei |
container_volume |
145 |
class |
000 150 VZ 54.74 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Hu, Lianyu |
doi_str_mv |
10.1016/j.patcog.2023.109903 |
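The DOI in doi_str_mv resolves through https://doi.org, which also supports content negotiation for machine-readable citation metadata. A minimal sketch (network access assumed; the CSL JSON media type is the standard one served by CrossRef):

```python
# Fetch citation metadata for this record's DOI via doi.org content negotiation.
import requests

doi = "10.1016/j.patcog.2023.109903"
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=10,
)
meta = resp.json()
print(meta["title"])            # article title
print(meta["container-title"])  # Pattern Recognition
```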
normlink |
(ORCID)0000-0001-9974-4920 (ORCID)0000-0003-4518-2154 |
normlink_prefix_str_mv |
(orcid)0000-0001-9974-4920 (orcid)0000-0003-4518-2154 |
dewey-full |
000 150 |
author2-role |
verfasserin |
title_sort |
scalable frame resolution for efficient continuous sign language recognition |
title_auth |
Scalable frame resolution for efficient continuous sign language recognition |
abstract |
In this paper, we explore the spatial redundancy in continuous sign language recognition (CSLR), aiming to improve its efficiency. Despite recent advances in accuracy in CSLR, state-of-the-art CSLR methods typically require large amounts of computations and memory occupation, which are not friendly towards fast inference under limited computation/memory budgets. Based on a simple observation that not all frames are equally important for CSLR, we propose AdaSize to handle this problem by modeling the frame resolution decision as an end-to-end learnable task to save unnecessary computations. Specifically, a lightweight 2D convolutional neural network (CNN) is first used to quickly browse input frames under a low resolution (e.g., 112 × 112). These extracted coarse and cheap features are sent into a recurrent policy network to dynamically determine the desired resolution for each frame. Once the optimal resolution for each frame is decided, frames with different resolutions are fed into the following backbones to extract representative features. Finally, these features pass through a sequence of temporal modules and a classifier to predict sentences. Extensive experiments on four large-scale datasets, including PHOENIX14, PHOENIX14-T, CSL-Daily and CSL, demonstrate the effectiveness of AdaSize. AdaSize could consistently achieve comparable accuracy with state-of-the-art CSLR methods, with only 0.38 × computations, 0.41 × memory usage and 1.25 × throughput. Comparisons with commonly-used lightweight backbones and other efficient methods verify the superiority of AdaSize under similar computational/memory budgets. We finally plot the frame resolution decisions for AdaSize, hoping to provide insightful analysis of the inherent spatial redundancy in videos. |
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
title_short |
Scalable frame resolution for efficient continuous sign language recognition |
remote_bool |
true |
author2 |
Gao, Liqing Liu, Zekang Feng, Wei |
author2Str |
Gao, Liqing Liu, Zekang Feng, Wei |
ppnlink |
265784131 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.patcog.2023.109903 |
up_date |
2024-07-06T21:24:49.277Z |
_version_ |
1803866434991816704 |
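Fields such as id, title, and doi_str_mv are plain index fields of the underlying Solr/VuFind instance (the institution field above names findex.gbv.de). Retrieving this record therefore looks like any Solr select query; the endpoint URL below is purely hypothetical:

```python
# Hypothetical Solr lookup of this record by its catalog ID.
import requests

SOLR_SELECT = "https://example.org/solr/biblio/select"  # assumed endpoint
params = {
    "q": 'id:"ELV06498298X"',
    "fl": "title,author,doi_str_mv,container_title",
    "wt": "json",
}
docs = requests.get(SOLR_SELECT, params=params, timeout=10).json()["response"]["docs"]
print(docs[0]["title"])  # Scalable frame resolution for efficient continuous sign language recognition
```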
code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.74</subfield><subfield code="j">Maschinelles Sehen</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">145</subfield></datafield></record></collection>