HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog
Visual dialog, a visual-language task, enables an AI agent to engage in conversation with humans grounded in a given image. To generate appropriate answers for a series of questions in the dialog, the agent is required to understand both the comprehensive visual content of an image and the fine-grained textual context of the dialog. However, previous studies typically utilized object-level visual features to represent a whole image, which focuses only on the local perspective of an image and ignores the importance of its global information. In this paper, we propose a novel model, the Human-Like Visual Cognition and Language-Memory Network for Visual Dialog (HVLM), to simulate the global and local dual-perspective cognition of the human visual system and understand an image comprehensively. HVLM consists of two key modules: Local-to-Global Graph Convolutional Visual Cognition (LG-GCVC) and Question-guided Language Topic Memory (T-Mem). Specifically, in the LG-GCVC module, we design question-guided dual-perspective reasoning to jointly learn visual contents from both local and global perspectives through a simple spectral graph convolution network. Furthermore, in the T-Mem module, we design an iterative learning strategy to gradually enhance fine-grained textual context details via an attention mechanism. Experimental results demonstrate the superiority of our proposed model, which obtains competitive performance on the benchmark datasets VisDial v1.0 and VisDial v0.9.
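The abstract's "simple spectral graph convolution network" and "question-guided dual-perspective reasoning" can be made concrete with a small sketch. The PyTorch code below is a hypothetical illustration of those two ideas, not the authors' implementation; all class names, tensor shapes, and the fusion rule are assumptions made for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSpectralGraphConv(nn.Module):
    """SGC-style propagation: X' = (D^-1/2 (A + I) D^-1/2)^K X W.
    A hypothetical sketch of the 'simple spectral graph convolution'
    named in the abstract; not the paper's released code."""

    def __init__(self, dim: int, hops: int = 2):
        super().__init__()
        self.hops = hops
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_regions, dim); adj: (batch, num_regions, num_regions)
        a_hat = adj + torch.eye(adj.size(-1), device=adj.device)
        deg = a_hat.sum(-1).clamp(min=1e-6).pow(-0.5)
        norm_adj = deg.unsqueeze(-1) * a_hat * deg.unsqueeze(-2)
        for _ in range(self.hops):       # K-hop propagation; no nonlinearity
            x = torch.bmm(norm_adj, x)   # between hops, as in SGC
        return self.proj(x)

def question_guided_fusion(q, local_feats, global_feat):
    """Fuse local (object-level) and global (image-level) views under the
    question -- one assumed reading of 'dual-perspective reasoning'."""
    # q: (batch, dim); local_feats: (batch, n, dim); global_feat: (batch, dim)
    attn = F.softmax(torch.bmm(local_feats, q.unsqueeze(-1)).squeeze(-1), dim=-1)
    local_view = torch.bmm(attn.unsqueeze(1), local_feats).squeeze(1)
    return local_view + global_feat      # combine the two perspectives
```

The SGC-style layer keeps the visual-reasoning step cheap by applying the normalized adjacency K times without intermediate nonlinearities; the fusion step then lets the question weight each local region before the global view is added.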
Detailed description

Author: Sun, Kaili [author]
Format: E-Article
Language: English
Published: 2022 (transfer abstract)
Subjects: Dual-perspective reasoning; Simple spectral graph convolution network; Visual Dialog; Visual-language understanding
Parent work: Contained in: Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts - Feng, Yonghai ELSEVIER, 2014, an international journal, Amsterdam [u.a.]
Parent work: volume:59 ; year:2022 ; number:5 ; pages:0
Links:
DOI / URN: 10.1016/j.ipm.2022.103008
Catalog ID: ELV058802215
LEADER  01000caa a22002652 4500
001     ELV058802215
003     DE-627
005     20230626051623.0
007     cr uuu---uuuuu
008     221103s2022 xx |||||o 00| ||eng c
024 7_  |a 10.1016/j.ipm.2022.103008 |2 doi
028 52  |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001984.pica
035 __  |a (DE-627)ELV058802215
035 __  |a (ELSEVIER)S0306-4573(22)00119-4
040 __  |a DE-627 |b ger |c DE-627 |e rakwb
041 __  |a eng
082 04  |a 540 |q VZ
082 04  |a 570 |q VZ
084 __  |a 58.11 |2 bkl
100 1_  |a Sun, Kaili |e verfasserin |4 aut
245 10  |a HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog
264 _1  |c 2022transfer abstract
336 __  |a nicht spezifiziert |b zzz |2 rdacontent
337 __  |a nicht spezifiziert |b z |2 rdamedia
338 __  |a nicht spezifiziert |b zu |2 rdacarrier
520 __  |a Visual dialog, a visual-language task, enables an AI agent to engage in conversation with humans grounded in a given image. To generate appropriate answers for a series of questions in the dialog, the agent is required to understand the comprehensive visual content of an image and the fine-grained textual context of the dialog. However, previous studies typically utilized the object-level visual feature to represent a whole image, which only focuses on the local perspective of an image but ignores the importance of the global information in an image. In this paper, we proposed a novel model Human-Like Visual Cognitive and Language-Memory Network for Visual Dialog (HVLM), to simulate global and local dual-perspective cognitions in the human visual system and understand an image comprehensively. HVLM consists of two key modules, Local-to-Global Graph Convolutional Visual Cognition (LG-GCVC) and Question-guided Language Topic Memory (T-Mem). Specifically, in the LG-GCVC module, we design a question-guided dual-perspective reasoning to jointly learn visual contents from both local and global perspectives through a simple spectral graph convolution network. Furthermore, in the T-Mem module, we design an iterative learning strategy to gradually enhance fine-grained textual context details via an attention mechanism. Experimental results demonstrate the superiority of our proposed model, which obtains the comparable performance on benchmark datasets VisDial v1.0 and VisDial v0.9.
650 _7  |a Dual-perspective reasoning |2 Elsevier
650 _7  |a Simple spectral graph convolution network |2 Elsevier
650 _7  |a Visual Dialog |2 Elsevier
650 _7  |a Visual-language understanding |2 Elsevier
700 1_  |a Guo, Chi |4 oth
700 1_  |a Zhang, Huyin |4 oth
700 1_  |a Li, Yuan |4 oth
773 08  |i Enthalten in |n Elsevier Science |a Feng, Yonghai ELSEVIER |t Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts |d 2014 |d an international journal |g Amsterdam [u.a.] |w (DE-627)ELV017696526
773 18  |g volume:59 |g year:2022 |g number:5 |g pages:0
856 40  |u https://doi.org/10.1016/j.ipm.2022.103008 |3 Volltext
912 __  |a GBV_USEFLAG_U
912 __  |a GBV_ELV
912 __  |a SYSFLAG_U
912 __  |a SSG-OLC-PHA
912 __  |a GBV_ILN_20
912 __  |a GBV_ILN_23
912 __  |a GBV_ILN_70
936 bk  |a 58.11 |j Mechanische Verfahrenstechnik |q VZ
951 __  |a AR
952 __  |d 59 |j 2022 |e 5 |h 0
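The record above can also be handled programmatically. A minimal sketch, assuming the MARCXML serialization reproduced in the fullrecord field further down has been saved to a local file (the file name is a placeholder), using the third-party pymarc library:

```python
# Read this record's MARCXML with pymarc.
# "hvlm_record.xml" is a hypothetical file holding the <collection>
# XML shown in the fullrecord field below.
from pymarc import parse_xml_to_array

record = parse_xml_to_array("hvlm_record.xml")[0]

print(record["245"]["a"])          # title (field 245 $a)
print(record["024"]["a"])          # DOI carried in field 024 $a
for subject in record.get_fields("650"):
    print(subject["a"])            # Elsevier subject headings
```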
author_variant |
k s ks |
matchkey_str |
sunkailiguochizhanghuyinliyuan:2022----:vmxlrnhmnieiulontoadagaeeoye |
hierarchy_sort_str |
2022transfer abstract |
bklnumber |
58.11 |
publishDate |
2022 |
language |
English |
source |
Enthalten in Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts Amsterdam [u.a.] volume:59 year:2022 number:5 pages:0 |
sourceStr |
Enthalten in Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts Amsterdam [u.a.] volume:59 year:2022 number:5 pages:0 |
format_phy_str_mv |
Article |
bklname |
Mechanische Verfahrenstechnik |
institution |
findex.gbv.de |
topic_facet |
Dual-perspective reasoning Simple spectral graph convolution network Visual Dialog Visual-language understanding |
dewey-raw |
540 |
isfreeaccess_bool |
false |
container_title |
Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts |
authorswithroles_txt_mv |
Sun, Kaili @@aut@@ Guo, Chi @@oth@@ Zhang, Huyin @@oth@@ Li, Yuan @@oth@@ |
publishDateDaySort_date |
2022-01-01T00:00:00Z |
hierarchy_top_id |
ELV017696526 |
dewey-sort |
3540 |
id |
ELV058802215 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV058802215</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626051623.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">221103s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.ipm.2022.103008</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001984.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV058802215</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0306-4573(22)00119-4</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">540</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">570</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">58.11</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Sun, Kaili</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Visual dialog, a visual-language task, enables an AI agent to engage in conversation with humans grounded in a given image. To generate appropriate answers for a series of questions in the dialog, the agent is required to understand the comprehensive visual content of an image and the fine-grained textual context of the dialog. However, previous studies typically utilized the object-level visual feature to represent a whole image, which only focuses on the local perspective of an image but ignores the importance of the global information in an image. In this paper, we proposed a novel model Human-Like Visual Cognitive and Language-Memory Network for Visual Dialog (HVLM), to simulate global and local dual-perspective cognitions in the human visual system and understand an image comprehensively. HVLM consists of two key modules, Local-to-Global Graph Convolutional Visual Cognition (LG-GCVC) and Question-guided Language Topic Memory (T-Mem). 
Specifically, in the LG-GCVC module, we design a question-guided dual-perspective reasoning to jointly learn visual contents from both local and global perspectives through a simple spectral graph convolution network. Furthermore, in the T-Mem module, we design an iterative learning strategy to gradually enhance fine-grained textual context details via an attention mechanism. Experimental results demonstrate the superiority of our proposed model, which obtains the comparable performance on benchmark datasets VisDial v1.0 and VisDial v0.9.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Visual dialog, a visual-language task, enables an AI agent to engage in conversation with humans grounded in a given image. To generate appropriate answers for a series of questions in the dialog, the agent is required to understand the comprehensive visual content of an image and the fine-grained textual context of the dialog. However, previous studies typically utilized the object-level visual feature to represent a whole image, which only focuses on the local perspective of an image but ignores the importance of the global information in an image. In this paper, we proposed a novel model Human-Like Visual Cognitive and Language-Memory Network for Visual Dialog (HVLM), to simulate global and local dual-perspective cognitions in the human visual system and understand an image comprehensively. HVLM consists of two key modules, Local-to-Global Graph Convolutional Visual Cognition (LG-GCVC) and Question-guided Language Topic Memory (T-Mem). Specifically, in the LG-GCVC module, we design a question-guided dual-perspective reasoning to jointly learn visual contents from both local and global perspectives through a simple spectral graph convolution network. Furthermore, in the T-Mem module, we design an iterative learning strategy to gradually enhance fine-grained textual context details via an attention mechanism. 
Experimental results demonstrate the superiority of our proposed model, which obtains the comparable performance on benchmark datasets VisDial v1.0 and VisDial v0.9.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Dual-perspective reasoning</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Simple spectral graph convolution network</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Visual Dialog</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Visual-language understanding</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Guo, Chi</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Huyin</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Yuan</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier Science</subfield><subfield code="a">Feng, Yonghai ELSEVIER</subfield><subfield code="t">Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts</subfield><subfield code="d">2014</subfield><subfield code="d">an international journal</subfield><subfield code="g">Amsterdam [u.a.]</subfield><subfield code="w">(DE-627)ELV017696526</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:59</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:5</subfield><subfield code="g">pages:0</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.ipm.2022.103008</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">58.11</subfield><subfield code="j">Mechanische Verfahrenstechnik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">59</subfield><subfield code="j">2022</subfield><subfield code="e">5</subfield><subfield code="h">0</subfield></datafield></record></collection>
|
author |
Sun, Kaili |
spellingShingle |
Sun, Kaili ddc 540 ddc 570 bkl 58.11 Elsevier Dual-perspective reasoning Elsevier Simple spectral graph convolution network Elsevier Visual Dialog Elsevier Visual-language understanding HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog |
authorStr |
Sun, Kaili |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV017696526 |
format |
electronic Article |
dewey-ones |
540 - Chemistry & allied sciences 570 - Life sciences; biology |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
540 VZ 570 VZ 58.11 bkl HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog Dual-perspective reasoning Elsevier Simple spectral graph convolution network Elsevier Visual Dialog Elsevier Visual-language understanding Elsevier |
topic |
ddc 540 ddc 570 bkl 58.11 Elsevier Dual-perspective reasoning Elsevier Simple spectral graph convolution network Elsevier Visual Dialog Elsevier Visual-language understanding |
topic_unstemmed |
ddc 540 ddc 570 bkl 58.11 Elsevier Dual-perspective reasoning Elsevier Simple spectral graph convolution network Elsevier Visual Dialog Elsevier Visual-language understanding |
topic_browse |
ddc 540 ddc 570 bkl 58.11 Elsevier Dual-perspective reasoning Elsevier Simple spectral graph convolution network Elsevier Visual Dialog Elsevier Visual-language understanding |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
c g cg h z hz y l yl |
hierarchy_parent_title |
Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts |
hierarchy_parent_id |
ELV017696526 |
dewey-tens |
540 - Chemistry 570 - Life sciences; biology |
hierarchy_top_title |
Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV017696526 |
title |
HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog |
ctrlnum |
(DE-627)ELV058802215 (ELSEVIER)S0306-4573(22)00119-4 |
title_full |
HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog |
author_sort |
Sun, Kaili |
journal |
Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts |
journalStr |
Selective oxidation of 1,2-propanediol to lactic acid catalyzed by nanosized Mg(OH)2-supported bimetallic Au–Pd catalysts |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
0 |
author_browse |
Sun, Kaili |
container_volume |
59 |
class |
540 VZ 570 VZ 58.11 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Sun, Kaili |
doi_str_mv |
10.1016/j.ipm.2022.103008 |
dewey-full |
540 570 |
title_sort |
hvlm: exploring human-like visual cognition and language-memory network for visual dialog |
title_auth |
HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog |
abstract |
Visual dialog, a visual-language task, enables an AI agent to engage in conversation with humans grounded in a given image. To generate appropriate answers for a series of questions in the dialog, the agent is required to understand the comprehensive visual content of an image and the fine-grained textual context of the dialog. However, previous studies typically utilized the object-level visual feature to represent a whole image, which only focuses on the local perspective of an image but ignores the importance of the global information in an image. In this paper, we proposed a novel model Human-Like Visual Cognitive and Language-Memory Network for Visual Dialog (HVLM), to simulate global and local dual-perspective cognitions in the human visual system and understand an image comprehensively. HVLM consists of two key modules, Local-to-Global Graph Convolutional Visual Cognition (LG-GCVC) and Question-guided Language Topic Memory (T-Mem). Specifically, in the LG-GCVC module, we design a question-guided dual-perspective reasoning to jointly learn visual contents from both local and global perspectives through a simple spectral graph convolution network. Furthermore, in the T-Mem module, we design an iterative learning strategy to gradually enhance fine-grained textual context details via an attention mechanism. Experimental results demonstrate the superiority of our proposed model, which obtains the comparable performance on benchmark datasets VisDial v1.0 and VisDial v0.9. |
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_20 GBV_ILN_23 GBV_ILN_70 |
container_issue |
5 |
title_short |
HVLM: Exploring Human-Like Visual Cognition and Language-Memory Network for Visual Dialog |
url |
https://doi.org/10.1016/j.ipm.2022.103008 |
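As a side note on the url field above: the DOI can be dereferenced to structured citation metadata via standard DOI content negotiation, as in this minimal sketch (network access and the third-party requests package assumed):

```python
import requests

# Ask the DOI resolver for CSL JSON instead of the landing page;
# this content type is supported by the Crossref/DataCite resolvers.
resp = requests.get(
    "https://doi.org/10.1016/j.ipm.2022.103008",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
resp.raise_for_status()
meta = resp.json()
print(meta["title"])                # article title as registered
print(meta.get("container-title"))  # journal title per the DOI registry
```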
remote_bool |
true |
author2 |
Guo, Chi Zhang, Huyin Li, Yuan |
author2Str |
Guo, Chi Zhang, Huyin Li, Yuan |
ppnlink |
ELV017696526 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth |
doi_str |
10.1016/j.ipm.2022.103008 |
up_date |
2024-07-06T20:05:57.896Z |
_version_ |
1803861473777156096 |