RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning
Text present in an image carries rich semantic information that is crucial for understanding the image. For example, a signboard with the text “deep water” conveys the danger present in the scene. Current image captioning models do not effectively exploit this semantic information because of their limited ability to represent scene-text tokens. This work presents a novel image captioning model, RelNet-MAM, which combines a multilevel attention mechanism with a relation network. To improve the appearance feature representation, RelNet-MAM uses multilevel attention consisting of spatial, channel-wise, and semantic attention. To represent scene-text tokens effectively, it uses appearance, FastText, location, and PHOC features for each token. The relation network then establishes relationships between objects and scene-text tokens. Finally, a transformer model together with dynamic pointer networks serves as the decoder during caption generation. RelNet-MAM outperforms state-of-the-art models on the TextCaps, Flickr30k, and MS COCO datasets. TextCaps requires models to read and reason about text in an image to generate captions; MS COCO and Flickr30k contain diverse images: persons, animals, automobiles, and indoor and outdoor scenes. Notably, RelNet-MAM outperforms the current best model on TextCaps by 2.3% on B-4, 1.8% on METEOR, 2.2% on ROUGE-L, 2.0% on CIDEr-D, and 3.0% on SPICE.
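The multilevel attention described in the abstract combines spatial, channel-wise, and semantic attention over image features. As a rough, hypothetical sketch (not the paper's actual formulation — shapes, weighting, and composition order are assumptions), the spatial and channel-wise components can be illustrated in NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    # feat: (C, H, W). Score each spatial location by its mean
    # activation across channels, then reweight every channel there.
    scores = feat.mean(axis=0)                              # (H, W)
    weights = softmax(scores.reshape(-1)).reshape(scores.shape)
    return feat * weights[None, :, :]                       # (C, H, W)

def channel_attention(feat):
    # Score each channel by its global average activation,
    # then reweight the whole channel map.
    scores = feat.mean(axis=(1, 2))                         # (C,)
    weights = softmax(scores)
    return feat * weights[:, None, None]                    # (C, H, W)

def multilevel_attention(feat):
    # One plausible composition of the two levels: channel-wise
    # reweighting first, then spatial reweighting.
    return spatial_attention(channel_attention(feat))
```

Semantic attention (the third level in the paper) would additionally attend over predicted attribute or concept embeddings, which is omitted here.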
Full description
Author(s): Srivastava, Swati [author]; Sharma, Himanshu [author]
Format: E-Article
Language: English
Published: 2023
Subjects: Attention; Spatial; Image captioning; Multilevel; Semantic
Contained in: Microprocessors and microsystems - Amsterdam [u.a.] : Elsevier, 1979, volume 102
DOI: 10.1016/j.micpro.2023.104931
Catalog ID: ELV065101014
LEADER 01000naa a22002652 4500
001    ELV065101014
003    DE-627
005    20231013093036.0
007    cr uuu---uuuuu
008    231013s2023 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.micpro.2023.104931 |2 doi
035    |a (DE-627)ELV065101014
035    |a (ELSEVIER)S0141-9331(23)00175-8
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 04 |a 510 |q VZ
084    |a 53.55 |2 bkl
084    |a 54.31 |2 bkl
100 1  |a Srivastava, Swati |e verfasserin |4 aut
245 10 |a RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning
264  1 |c 2023
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Text present in an image contains rich semantic information which is crucial for the understanding of an image. For example, a signboard having the text “deep water” conveys the danger involved in the image. The current image captioning models do not effectively utilize this useful semantic information due to their limited representation capabilities of scene-text tokens. Our work presents a novel image captioning model called RelNet-MAM, which utilizes a multilevel attention mechanism and relation network. To improve the appearance feature representation, RelNet-MAM uses multilevel attention which consists of spatial attention, channel-wise attention, and semantic attention. For representing the scene-text token effectively, RelNet-MAM uses appearance, FastText, location, and PHOC features for each token. Further, the proposed RelNet-MAM uses the relation network to establish the relationships between the objects and scene-text tokens. Finally, the transformer model together with dynamic pointer networks is used as a decoder in the caption generation process. The proposed RelNet-MAM model outperforms the state-of-the-art models on TextCaps, Flickr30k, and MS COCO datasets. TextCaps requires models to read and reason about the texts in an image for caption generation. MSCOCO and Flickr30k contain diverse images; persons, animals, automobiles, indoor and outdoor scenes. Remarkably, the proposed RelNet-MAM model outperforms the current best model by 2.3% on B-4, 1.8% on METEOR, 2.2% on ROUGE-L, 2.0% on CIDEr-D and 3.0% on SPICE metric scores on TextCaps dataset.
650  4 |a Attention
650  4 |a Spatial
650  4 |a Image captioning
650  4 |a Multilevel
650  4 |a Semantic
700 1  |a Sharma, Himanshu |e verfasserin |4 aut
773 08 |i Enthalten in |t Microprocessors and microsystems |d Amsterdam [u.a.] : Elsevier, 1979 |g 102 |h Online-Ressource |w (DE-627)271175982 |w (DE-600)1479003-8 |w (DE-576)251938107 |7 nnns
773 18 |g volume:102
912    |a GBV_USEFLAG_U
912    |a GBV_ELV
912    |a SYSFLAG_U
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_32
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_90
912    |a GBV_ILN_95
912    |a GBV_ILN_100
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_150
912    |a GBV_ILN_151
912    |a GBV_ILN_187
912    |a GBV_ILN_213
912    |a GBV_ILN_224
912    |a GBV_ILN_230
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_702
912    |a GBV_ILN_2001
912    |a GBV_ILN_2003
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2007
912    |a GBV_ILN_2008
912    |a GBV_ILN_2009
912    |a GBV_ILN_2010
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2015
912    |a GBV_ILN_2020
912    |a GBV_ILN_2021
912    |a GBV_ILN_2025
912    |a GBV_ILN_2026
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2055
912    |a GBV_ILN_2056
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2088
912    |a GBV_ILN_2106
912    |a GBV_ILN_2110
912    |a GBV_ILN_2111
912    |a GBV_ILN_2112
912    |a GBV_ILN_2122
912    |a GBV_ILN_2129
912    |a GBV_ILN_2143
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2190
912    |a GBV_ILN_2232
912    |a GBV_ILN_2336
912    |a GBV_ILN_2470
912    |a GBV_ILN_2507
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4242
912    |a GBV_ILN_4249
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4338
912    |a GBV_ILN_4393
912    |a GBV_ILN_4700
936 bk |a 53.55 |j Mikroelektronik |q VZ
936 bk |a 54.31 |j Rechnerarchitektur |q VZ
951    |a AR
952    |d 102
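The 520 abstract above mentions PHOC features for scene-text tokens. PHOC (Pyramidal Histogram of Characters) is a standard binary word representation; below is a minimal sketch of one common formulation (the alphabet, pyramid levels, and 50%-overlap assignment rule are assumptions — the paper's exact variant may differ):

```python
import string

ALPHABET = string.ascii_lowercase + string.digits  # 36 symbols

def phoc(word, levels=(1, 2, 3)):
    """Pyramidal Histogram of Characters for a word.

    The i-th character of an n-character word occupies the interval
    [i/n, (i+1)/n) of the word; it is assigned to a region of a level
    when at least half of its span falls inside that region. Each
    region contributes one binary presence vector over ALPHABET.
    """
    word = word.lower()
    n = len(word)
    vec = []
    for level in levels:
        for r in range(level):
            region = [0] * len(ALPHABET)
            r0, r1 = r / level, (r + 1) / level
            for i, ch in enumerate(word):
                if ch not in ALPHABET:
                    continue  # skip characters outside the alphabet
                c0, c1 = i / n, (i + 1) / n
                overlap = min(c1, r1) - max(c0, r0)
                if overlap / (c1 - c0) >= 0.5:
                    region[ALPHABET.index(ch)] = 1
            vec.extend(region)
    return vec
```

For levels (1, 2, 3) this yields a (1+2+3)×36 = 216-dimensional binary vector; the level-1 block is a plain bag of characters, while deeper levels encode where in the word each character occurs.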
code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">53.55</subfield><subfield code="j">Mikroelektronik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.31</subfield><subfield code="j">Rechnerarchitektur</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">102</subfield></datafield></record></collection>
|
author |
Srivastava, Swati |
spellingShingle |
Srivastava, Swati ddc 510 bkl 53.55 bkl 54.31 misc Attention misc Spatial misc Image captioning misc Multilevel misc Semantic RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning |
authorStr |
Srivastava, Swati |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)271175982 |
format |
electronic Article |
dewey-ones |
510 - Mathematics |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
510 VZ 53.55 bkl 54.31 bkl RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning Attention Spatial Image captioning Multilevel Semantic |
topic |
ddc 510 bkl 53.55 bkl 54.31 misc Attention misc Spatial misc Image captioning misc Multilevel misc Semantic |
topic_unstemmed |
ddc 510 bkl 53.55 bkl 54.31 misc Attention misc Spatial misc Image captioning misc Multilevel misc Semantic |
topic_browse |
ddc 510 bkl 53.55 bkl 54.31 misc Attention misc Spatial misc Image captioning misc Multilevel misc Semantic |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Microprocessors and microsystems |
hierarchy_parent_id |
271175982 |
dewey-tens |
510 - Mathematics |
hierarchy_top_title |
Microprocessors and microsystems |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)271175982 (DE-600)1479003-8 (DE-576)251938107 |
title |
RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning |
ctrlnum |
(DE-627)ELV065101014 (ELSEVIER)S0141-9331(23)00175-8 |
title_full |
RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning |
author_sort |
Srivastava, Swati |
journal |
Microprocessors and microsystems |
journalStr |
Microprocessors and microsystems |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
author_browse |
Srivastava, Swati Sharma, Himanshu |
container_volume |
102 |
class |
510 VZ 53.55 bkl 54.31 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Srivastava, Swati |
doi_str_mv |
10.1016/j.micpro.2023.104931 |
dewey-full |
510 |
author2-role |
verfasserin |
title_sort |
relnet-mam: relation network with multilevel attention mechanism for image captioning |
title_auth |
RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning |
abstract |
Text present in an image contains rich semantic information that is crucial for understanding the image. For example, a signboard with the text “deep water” conveys the danger present in the scene. Current image captioning models do not effectively utilize this semantic information because of their limited ability to represent scene-text tokens. Our work presents a novel image captioning model called RelNet-MAM, which combines a multilevel attention mechanism with a relation network. To improve the appearance feature representation, RelNet-MAM uses multilevel attention consisting of spatial attention, channel-wise attention, and semantic attention. To represent scene-text tokens effectively, RelNet-MAM uses appearance, FastText, location, and PHOC features for each token. Further, RelNet-MAM uses the relation network to establish relationships between objects and scene-text tokens. Finally, a transformer model together with dynamic pointer networks is used as the decoder in the caption generation process. The proposed RelNet-MAM model outperforms state-of-the-art models on the TextCaps, Flickr30k, and MS COCO datasets. TextCaps requires models to read and reason about the text in an image for caption generation; MS COCO and Flickr30k contain diverse images: persons, animals, automobiles, and indoor and outdoor scenes. Remarkably, RelNet-MAM outperforms the current best model by 2.3% on B-4, 1.8% on METEOR, 2.2% on ROUGE-L, 2.0% on CIDEr-D, and 3.0% on SPICE on the TextCaps dataset. |
abstractGer |
Text present in an image contains rich semantic information that is crucial for understanding the image. For example, a signboard with the text “deep water” conveys the danger present in the scene. Current image captioning models do not effectively utilize this semantic information because of their limited ability to represent scene-text tokens. Our work presents a novel image captioning model called RelNet-MAM, which combines a multilevel attention mechanism with a relation network. To improve the appearance feature representation, RelNet-MAM uses multilevel attention consisting of spatial attention, channel-wise attention, and semantic attention. To represent scene-text tokens effectively, RelNet-MAM uses appearance, FastText, location, and PHOC features for each token. Further, RelNet-MAM uses the relation network to establish relationships between objects and scene-text tokens. Finally, a transformer model together with dynamic pointer networks is used as the decoder in the caption generation process. The proposed RelNet-MAM model outperforms state-of-the-art models on the TextCaps, Flickr30k, and MS COCO datasets. TextCaps requires models to read and reason about the text in an image for caption generation; MS COCO and Flickr30k contain diverse images: persons, animals, automobiles, and indoor and outdoor scenes. Remarkably, RelNet-MAM outperforms the current best model by 2.3% on B-4, 1.8% on METEOR, 2.2% on ROUGE-L, 2.0% on CIDEr-D, and 3.0% on SPICE on the TextCaps dataset. |
abstract_unstemmed |
Text present in an image contains rich semantic information that is crucial for understanding the image. For example, a signboard with the text “deep water” conveys the danger present in the scene. Current image captioning models do not effectively utilize this semantic information because of their limited ability to represent scene-text tokens. Our work presents a novel image captioning model called RelNet-MAM, which combines a multilevel attention mechanism with a relation network. To improve the appearance feature representation, RelNet-MAM uses multilevel attention consisting of spatial attention, channel-wise attention, and semantic attention. To represent scene-text tokens effectively, RelNet-MAM uses appearance, FastText, location, and PHOC features for each token. Further, RelNet-MAM uses the relation network to establish relationships between objects and scene-text tokens. Finally, a transformer model together with dynamic pointer networks is used as the decoder in the caption generation process. The proposed RelNet-MAM model outperforms state-of-the-art models on the TextCaps, Flickr30k, and MS COCO datasets. TextCaps requires models to read and reason about the text in an image for caption generation; MS COCO and Flickr30k contain diverse images: persons, animals, automobiles, and indoor and outdoor scenes. Remarkably, RelNet-MAM outperforms the current best model by 2.3% on B-4, 1.8% on METEOR, 2.2% on ROUGE-L, 2.0% on CIDEr-D, and 3.0% on SPICE on the TextCaps dataset. |
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
title_short |
RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning |
remote_bool |
true |
author2 |
Sharma, Himanshu |
author2Str |
Sharma, Himanshu |
ppnlink |
271175982 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.micpro.2023.104931 |
up_date |
2024-07-06T21:49:58.417Z |
_version_ |
1803868017439801344 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">ELV065101014</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20231013093036.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">231013s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.micpro.2023.104931</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV065101014</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0141-9331(23)00175-8</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">510</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">53.55</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.31</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Srivastava, Swati</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">RelNet-MAM: Relation Network with Multilevel Attention Mechanism for Image Captioning</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield 
code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Text present in an image contains rich semantic information that is crucial for understanding the image. For example, a signboard with the text “deep water” conveys the danger present in the scene. Current image captioning models do not effectively utilize this semantic information because of their limited ability to represent scene-text tokens. Our work presents a novel image captioning model called RelNet-MAM, which combines a multilevel attention mechanism with a relation network. To improve the appearance feature representation, RelNet-MAM uses multilevel attention consisting of spatial attention, channel-wise attention, and semantic attention. To represent scene-text tokens effectively, RelNet-MAM uses appearance, FastText, location, and PHOC features for each token. Further, RelNet-MAM uses the relation network to establish relationships between objects and scene-text tokens. Finally, a transformer model together with dynamic pointer networks is used as the decoder in the caption generation process. The proposed RelNet-MAM model outperforms state-of-the-art models on the TextCaps, Flickr30k, and MS COCO datasets. TextCaps requires models to read and reason about the text in an image for caption generation; MS COCO and Flickr30k contain diverse images: persons, animals, automobiles, and indoor and outdoor scenes.
Remarkably, RelNet-MAM outperforms the current best model by 2.3% on B-4, 1.8% on METEOR, 2.2% on ROUGE-L, 2.0% on CIDEr-D, and 3.0% on SPICE on the TextCaps dataset.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Attention</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Spatial</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image captioning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multilevel</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Semantic</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Sharma, Himanshu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Microprocessors and microsystems</subfield><subfield code="d">Amsterdam [u.a.]
: Elsevier, 1979</subfield><subfield code="g">102</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)271175982</subfield><subfield code="w">(DE-600)1479003-8</subfield><subfield code="w">(DE-576)251938107</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:102</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">53.55</subfield><subfield code="j">Mikroelektronik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.31</subfield><subfield code="j">Rechnerarchitektur</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">102</subfield></datafield></record></collection>
|