Image super-resolution based on deep neural network of multiple attention mechanism
At present, the main convolutional neural network (CNN) approach to super-resolution (SR) is to deepen the network via skip connections so as to improve the model's nonlinear expressive ability. However, such networks also become difficult to train and slow to converge. To train a smaller SR model with better performance, this paper constructs a novel image SR network with multiple attention mechanisms (MAMSR), which includes a channel attention mechanism and a spatial attention mechanism. By learning the relationships between the channels of the feature map and between the pixels at each position of the feature map, the network enhances its feature-expression ability and makes the reconstructed image closer to the real image. Experiments on public datasets show that the network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual quality.
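The channel and spatial attention described in the abstract can be sketched roughly as below. This is a generic illustration (an SE-style channel gate plus a simplified pooled spatial gate), not the paper's actual MAMSR architecture; the function names are hypothetical, and the small random matrices stand in for weights that a real network would learn by backpropagation.

```python
import numpy as np

def channel_attention(fmap, reduction=4):
    """SE-style channel attention: weight each channel by a gate derived
    from its global average (squeeze) passed through a tiny two-layer
    bottleneck (excitation). Weights here are random placeholders."""
    c, h, w = fmap.shape
    squeeze = fmap.mean(axis=(1, 2))                     # (c,) global average pool
    rng = np.random.default_rng(0)                       # stand-in for learned params
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # bottleneck down-projection
    w2 = rng.standard_normal((c, c // reduction)) * 0.1  # bottleneck up-projection
    gate = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ squeeze, 0.0))))  # sigmoid
    return fmap * gate[:, None, None]                    # rescale each channel

def spatial_attention(fmap):
    """Simplified spatial attention: build a per-pixel gate from
    channel-wise average and max pooling, then rescale every position.
    (CBAM-style designs instead convolve the two pooled maps.)"""
    avg = fmap.mean(axis=0)                              # (h, w) channel-average map
    mx = fmap.max(axis=0)                                # (h, w) channel-max map
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))             # sigmoid of pooled maps
    return fmap * gate[None, :, :]                       # rescale each position

# toy feature map: 8 channels, 16x16 spatial resolution
x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = spatial_attention(channel_attention(x))
print(y.shape)  # attention rescales but never changes the feature-map shape
```

Both gates only reweight the feature map, which is why such modules can be dropped into a residual SR backbone without altering tensor shapes.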
Detailed description
Author: Yang, Xin [author]
Format: E-Article
Language: English
Published: 2021 (transfer abstract)
Subjects: Attention mechanism; Super-resolution; CNN; Spatial attention; Channel attention
Parent work: Contained in: Propolis as lipid bioactive nano-carrier for topical nasal drug delivery - Rassu, Giovanna ELSEVIER, 2015, Orlando, Fla
Parent work: volume:75 ; year:2021 ; pages:0
Links:
DOI / URN: 10.1016/j.jvcir.2021.103019
Catalog ID: ELV053286936
LEADER 01000caa a22002652 4500
001 ELV053286936
003 DE-627
005 20230626034552.0
007 cr uuu---uuuuu
008 210910s2021 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.jvcir.2021.103019 |2 doi
028 52 |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica
035    |a (DE-627)ELV053286936
035    |a (ELSEVIER)S1047-3203(21)00001-8
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 540 |q VZ
082 04 |a 540 |q VZ
100 1  |a Yang, Xin |e verfasserin |4 aut
245 10 |a Image super-resolution based on deep neural network of multiple attention mechanism
264  1 |c 2021transfer abstract
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a nicht spezifiziert |b z |2 rdamedia
338    |a nicht spezifiziert |b zu |2 rdacarrier
520    |a At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects.
650  7 |a Attention mechanism |2 Elsevier
650  7 |a Super-resolution |2 Elsevier
650  7 |a CNN |2 Elsevier
650  7 |a Spatial attention |2 Elsevier
650  7 |a Channel attention |2 Elsevier
700 1  |a Li, Xiaochuan |4 oth
700 1  |a Li, Zhiqiang |4 oth
700 1  |a Zhou, Dake |4 oth
773 08 |i Enthalten in |n Academic Press |a Rassu, Giovanna ELSEVIER |t Propolis as lipid bioactive nano-carrier for topical nasal drug delivery |d 2015 |g Orlando, Fla |w (DE-627)ELV023814993
773 18 |g volume:75 |g year:2021 |g pages:0
856 40 |u https://doi.org/10.1016/j.jvcir.2021.103019 |3 Volltext
912    |a GBV_USEFLAG_U
912    |a GBV_ELV
912    |a SYSFLAG_U
912    |a SSG-OLC-PHA
912    |a GBV_ILN_11
912    |a GBV_ILN_21
912    |a GBV_ILN_22
912    |a GBV_ILN_31
912    |a GBV_ILN_39
912    |a GBV_ILN_40
912    |a GBV_ILN_50
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_72
912    |a GBV_ILN_136
912    |a GBV_ILN_162
912    |a GBV_ILN_165
912    |a GBV_ILN_176
912    |a GBV_ILN_181
912    |a GBV_ILN_203
912    |a GBV_ILN_227
912    |a GBV_ILN_352
912    |a GBV_ILN_676
912    |a GBV_ILN_791
912    |a GBV_ILN_1018
951    |a AR
952    |d 75 |j 2021 |h 0
author_variant |
x y xy |
---|---|
matchkey_str |
yangxinlixiaochuanlizhiqiangzhoudake:2021----:mgspreouinaeodenuantokfutpe |
hierarchy_sort_str |
2021transfer abstract |
publishDate |
2021 |
allfields |
10.1016/j.jvcir.2021.103019 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica (DE-627)ELV053286936 (ELSEVIER)S1047-3203(21)00001-8 DE-627 ger DE-627 rakwb eng 540 VZ 540 VZ Yang, Xin verfasserin aut Image super-resolution based on deep neural network of multiple attention mechanism 2021transfer abstract nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. 
By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention Elsevier Li, Xiaochuan oth Li, Zhiqiang oth Zhou, Dake oth Enthalten in Academic Press Rassu, Giovanna ELSEVIER Propolis as lipid bioactive nano-carrier for topical nasal drug delivery 2015 Orlando, Fla (DE-627)ELV023814993 volume:75 year:2021 pages:0 https://doi.org/10.1016/j.jvcir.2021.103019 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_11 GBV_ILN_21 GBV_ILN_22 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_50 GBV_ILN_69 GBV_ILN_70 GBV_ILN_72 GBV_ILN_136 GBV_ILN_162 GBV_ILN_165 GBV_ILN_176 GBV_ILN_181 GBV_ILN_203 GBV_ILN_227 GBV_ILN_352 GBV_ILN_676 GBV_ILN_791 GBV_ILN_1018 AR 75 2021 0 |
spelling |
10.1016/j.jvcir.2021.103019 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica (DE-627)ELV053286936 (ELSEVIER)S1047-3203(21)00001-8 DE-627 ger DE-627 rakwb eng 540 VZ 540 VZ Yang, Xin verfasserin aut Image super-resolution based on deep neural network of multiple attention mechanism 2021transfer abstract nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. 
By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention Elsevier Li, Xiaochuan oth Li, Zhiqiang oth Zhou, Dake oth Enthalten in Academic Press Rassu, Giovanna ELSEVIER Propolis as lipid bioactive nano-carrier for topical nasal drug delivery 2015 Orlando, Fla (DE-627)ELV023814993 volume:75 year:2021 pages:0 https://doi.org/10.1016/j.jvcir.2021.103019 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_11 GBV_ILN_21 GBV_ILN_22 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_50 GBV_ILN_69 GBV_ILN_70 GBV_ILN_72 GBV_ILN_136 GBV_ILN_162 GBV_ILN_165 GBV_ILN_176 GBV_ILN_181 GBV_ILN_203 GBV_ILN_227 GBV_ILN_352 GBV_ILN_676 GBV_ILN_791 GBV_ILN_1018 AR 75 2021 0 |
allfields_unstemmed |
10.1016/j.jvcir.2021.103019 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica (DE-627)ELV053286936 (ELSEVIER)S1047-3203(21)00001-8 DE-627 ger DE-627 rakwb eng 540 VZ 540 VZ Yang, Xin verfasserin aut Image super-resolution based on deep neural network of multiple attention mechanism 2021transfer abstract nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. 
By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention Elsevier Li, Xiaochuan oth Li, Zhiqiang oth Zhou, Dake oth Enthalten in Academic Press Rassu, Giovanna ELSEVIER Propolis as lipid bioactive nano-carrier for topical nasal drug delivery 2015 Orlando, Fla (DE-627)ELV023814993 volume:75 year:2021 pages:0 https://doi.org/10.1016/j.jvcir.2021.103019 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_11 GBV_ILN_21 GBV_ILN_22 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_50 GBV_ILN_69 GBV_ILN_70 GBV_ILN_72 GBV_ILN_136 GBV_ILN_162 GBV_ILN_165 GBV_ILN_176 GBV_ILN_181 GBV_ILN_203 GBV_ILN_227 GBV_ILN_352 GBV_ILN_676 GBV_ILN_791 GBV_ILN_1018 AR 75 2021 0 |
allfieldsGer |
10.1016/j.jvcir.2021.103019 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica (DE-627)ELV053286936 (ELSEVIER)S1047-3203(21)00001-8 DE-627 ger DE-627 rakwb eng 540 VZ 540 VZ Yang, Xin verfasserin aut Image super-resolution based on deep neural network of multiple attention mechanism 2021transfer abstract nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. 
By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention Elsevier Li, Xiaochuan oth Li, Zhiqiang oth Zhou, Dake oth Enthalten in Academic Press Rassu, Giovanna ELSEVIER Propolis as lipid bioactive nano-carrier for topical nasal drug delivery 2015 Orlando, Fla (DE-627)ELV023814993 volume:75 year:2021 pages:0 https://doi.org/10.1016/j.jvcir.2021.103019 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_11 GBV_ILN_21 GBV_ILN_22 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_50 GBV_ILN_69 GBV_ILN_70 GBV_ILN_72 GBV_ILN_136 GBV_ILN_162 GBV_ILN_165 GBV_ILN_176 GBV_ILN_181 GBV_ILN_203 GBV_ILN_227 GBV_ILN_352 GBV_ILN_676 GBV_ILN_791 GBV_ILN_1018 AR 75 2021 0 |
allfieldsSound |
10.1016/j.jvcir.2021.103019 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica (DE-627)ELV053286936 (ELSEVIER)S1047-3203(21)00001-8 DE-627 ger DE-627 rakwb eng 540 VZ 540 VZ Yang, Xin verfasserin aut Image super-resolution based on deep neural network of multiple attention mechanism 2021transfer abstract nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. 
By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects. Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention Elsevier Li, Xiaochuan oth Li, Zhiqiang oth Zhou, Dake oth Enthalten in Academic Press Rassu, Giovanna ELSEVIER Propolis as lipid bioactive nano-carrier for topical nasal drug delivery 2015 Orlando, Fla (DE-627)ELV023814993 volume:75 year:2021 pages:0 https://doi.org/10.1016/j.jvcir.2021.103019 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_11 GBV_ILN_21 GBV_ILN_22 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_50 GBV_ILN_69 GBV_ILN_70 GBV_ILN_72 GBV_ILN_136 GBV_ILN_162 GBV_ILN_165 GBV_ILN_176 GBV_ILN_181 GBV_ILN_203 GBV_ILN_227 GBV_ILN_352 GBV_ILN_676 GBV_ILN_791 GBV_ILN_1018 AR 75 2021 0 |
language |
English |
source |
Enthalten in Propolis as lipid bioactive nano-carrier for topical nasal drug delivery Orlando, Fla volume:75 year:2021 pages:0 |
sourceStr |
Enthalten in Propolis as lipid bioactive nano-carrier for topical nasal drug delivery Orlando, Fla volume:75 year:2021 pages:0 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Attention mechanism Super-resolution CNN Spatial attention Channel attention |
dewey-raw |
540 |
isfreeaccess_bool |
false |
container_title |
Propolis as lipid bioactive nano-carrier for topical nasal drug delivery |
authorswithroles_txt_mv |
Yang, Xin @@aut@@ Li, Xiaochuan @@oth@@ Li, Zhiqiang @@oth@@ Zhou, Dake @@oth@@ |
publishDateDaySort_date |
2021-01-01T00:00:00Z |
hierarchy_top_id |
ELV023814993 |
dewey-sort |
3540 |
id |
ELV053286936 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV053286936</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626034552.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">210910s2021 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.jvcir.2021.103019</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV053286936</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S1047-3203(21)00001-8</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">540</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">540</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Yang, Xin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Image super-resolution based on deep neural network of multiple attention mechanism</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021transfer abstract</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield 
code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. 
By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Attention mechanism</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Super-resolution</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">CNN</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Spatial attention</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Channel attention</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Xiaochuan</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Zhiqiang</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhou, Dake</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Academic Press</subfield><subfield code="a">Rassu, Giovanna ELSEVIER</subfield><subfield code="t">Propolis as lipid bioactive nano-carrier for topical nasal drug delivery</subfield><subfield code="d">2015</subfield><subfield code="g">Orlando, Fla</subfield><subfield code="w">(DE-627)ELV023814993</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:75</subfield><subfield 
code="g">year:2021</subfield><subfield code="g">pages:0</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.jvcir.2021.103019</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_21</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_50</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_72</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_136</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_162</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_165</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_176</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_181</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_203</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_227</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_352</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_676</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_791</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_1018</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">75</subfield><subfield code="j">2021</subfield><subfield code="h">0</subfield></datafield></record></collection>
|
author |
Yang, Xin |
spellingShingle |
Yang, Xin ddc 540 Elsevier Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention Image super-resolution based on deep neural network of multiple attention mechanism |
authorStr |
Yang, Xin |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV023814993 |
format |
electronic Article |
dewey-ones |
540 - Chemistry & allied sciences |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
540 VZ Image super-resolution based on deep neural network of multiple attention mechanism Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention Elsevier |
topic |
ddc 540 Elsevier Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention |
topic_unstemmed |
ddc 540 Elsevier Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention |
topic_browse |
ddc 540 Elsevier Attention mechanism Elsevier Super-resolution Elsevier CNN Elsevier Spatial attention Elsevier Channel attention |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
x l xl z l zl d z dz |
hierarchy_parent_title |
Propolis as lipid bioactive nano-carrier for topical nasal drug delivery |
hierarchy_parent_id |
ELV023814993 |
dewey-tens |
540 - Chemistry |
hierarchy_top_title |
Propolis as lipid bioactive nano-carrier for topical nasal drug delivery |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV023814993 |
title |
Image super-resolution based on deep neural network of multiple attention mechanism |
ctrlnum |
(DE-627)ELV053286936 (ELSEVIER)S1047-3203(21)00001-8 |
title_full |
Image super-resolution based on deep neural network of multiple attention mechanism |
author_sort |
Yang, Xin |
journal |
Propolis as lipid bioactive nano-carrier for topical nasal drug delivery |
journalStr |
Propolis as lipid bioactive nano-carrier for topical nasal drug delivery |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
zzz |
container_start_page |
0 |
author_browse |
Yang, Xin |
container_volume |
75 |
class |
540 VZ |
format_se |
Elektronische Aufsätze |
author-letter |
Yang, Xin |
doi_str_mv |
10.1016/j.jvcir.2021.103019 |
dewey-full |
540 |
title_sort |
image super-resolution based on deep neural network of multiple attention mechanism |
title_auth |
Image super-resolution based on deep neural network of multiple attention mechanism |
abstract |
At present, the dominant super-resolution (SR) approach based on convolutional neural networks (CNNs) is to increase the depth of the network via skip connections so as to improve the nonlinear representation ability of the model. However, such networks also become difficult to train and slow to converge. In order to train a smaller SR model with better performance, this paper constructs a novel image SR network with multiple attention mechanisms (MAMSR), which includes a channel attention mechanism and a spatial attention mechanism. By learning the relationships between the channels of the feature map and between the pixels at each position of the feature map, the network can enhance its feature representation ability and make the reconstructed image closer to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual quality.
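For orientation, a channel attention block reweights the channels of a feature map and a spatial attention block reweights its positions. The following is a minimal NumPy sketch of both ideas only; the function names are hypothetical and the learned weight layers of the paper's MAMSR blocks are deliberately omitted, so this is not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    """Reweight each channel of a (C, H, W) feature map by a gate
    derived from its global average (simplified SE-style block;
    learned weight layers omitted)."""
    desc = fmap.mean(axis=(1, 2))   # global average pooling -> (C,)
    gate = sigmoid(desc)            # per-channel gate in (0, 1)
    return fmap * gate[:, None, None]

def spatial_attention(fmap):
    """Reweight each (H, W) position by a gate pooled across channels
    (mean and max pooling fused by a sigmoid, again without learned convs)."""
    pooled = 0.5 * (fmap.mean(axis=0) + fmap.max(axis=0))  # (H, W)
    gate = sigmoid(pooled)
    return fmap * gate[None, :, :]
```

In the full model these gates would be produced by small learned layers and the two blocks combined with skip connections; the sketch only shows the reweighting pattern itself.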
abstractGer |
At present, the dominant super-resolution (SR) approach based on convolutional neural networks (CNNs) is to increase the depth of the network via skip connections so as to improve the nonlinear representation ability of the model. However, such networks also become difficult to train and slow to converge. In order to train a smaller SR model with better performance, this paper constructs a novel image SR network with multiple attention mechanisms (MAMSR), which includes a channel attention mechanism and a spatial attention mechanism. By learning the relationships between the channels of the feature map and between the pixels at each position of the feature map, the network can enhance its feature representation ability and make the reconstructed image closer to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual quality.
abstract_unstemmed |
At present, the dominant super-resolution (SR) approach based on convolutional neural networks (CNNs) is to increase the depth of the network via skip connections so as to improve the nonlinear representation ability of the model. However, such networks also become difficult to train and slow to converge. In order to train a smaller SR model with better performance, this paper constructs a novel image SR network with multiple attention mechanisms (MAMSR), which includes a channel attention mechanism and a spatial attention mechanism. By learning the relationships between the channels of the feature map and between the pixels at each position of the feature map, the network can enhance its feature representation ability and make the reconstructed image closer to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual quality.
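The evaluation metric PSNR named in the abstract is derived from the pixel-wise mean squared error. A minimal sketch, assuming images normalized to a peak intensity of `max_val` (1.0 here); this is the standard textbook formula, not code from the paper:

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-shape images, in dB;
    higher means the reconstruction is closer to the reference."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Two images differing by 0.1 at every pixel give MSE = 0.01,
# i.e. a PSNR of 20 dB.
```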
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_11 GBV_ILN_21 GBV_ILN_22 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_50 GBV_ILN_69 GBV_ILN_70 GBV_ILN_72 GBV_ILN_136 GBV_ILN_162 GBV_ILN_165 GBV_ILN_176 GBV_ILN_181 GBV_ILN_203 GBV_ILN_227 GBV_ILN_352 GBV_ILN_676 GBV_ILN_791 GBV_ILN_1018 |
title_short |
Image super-resolution based on deep neural network of multiple attention mechanism |
url |
https://doi.org/10.1016/j.jvcir.2021.103019 |
remote_bool |
true |
author2 |
Li, Xiaochuan Li, Zhiqiang Zhou, Dake |
author2Str |
Li, Xiaochuan Li, Zhiqiang Zhou, Dake |
ppnlink |
ELV023814993 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth |
doi_str |
10.1016/j.jvcir.2021.103019 |
up_date |
2024-07-06T18:31:56.502Z |
_version_ |
1803855558346801152 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV053286936</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626034552.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">210910s2021 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.jvcir.2021.103019</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001316.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV053286936</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S1047-3203(21)00001-8</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">540</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">540</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Yang, Xin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Image super-resolution based on deep neural network of multiple attention mechanism</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021transfer abstract</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield 
code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">At present, the main super-resolution (SR) method based on convolutional neural network (CNN) is to increase the layer number of the network by skip connection so as to improve the nonlinear expression ability of the model. However, the network also becomes difficult to be trained and converge. In order to train a smaller but better performance SR model, this paper constructs a novel image SR network of multiple attention mechanism(MAMSR), which includes channel attention mechanism and spatial attention mechanism. 
By learning the relationship between the channels of the feature map and the relationship between the pixels in each position of the feature map, the network can enhance the ability of feature expression and make the reconstructed image more close to the real image. Experiments on public datasets show that our network surpasses some current state-of-the-art algorithms in PSNR, SSIM, and visual effects.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Attention mechanism</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Super-resolution</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">CNN</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Spatial attention</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Channel attention</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Xiaochuan</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Zhiqiang</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhou, Dake</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Academic Press</subfield><subfield code="a">Rassu, Giovanna ELSEVIER</subfield><subfield code="t">Propolis as lipid bioactive nano-carrier for topical nasal drug delivery</subfield><subfield code="d">2015</subfield><subfield code="g">Orlando, Fla</subfield><subfield code="w">(DE-627)ELV023814993</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:75</subfield><subfield 
code="g">year:2021</subfield><subfield code="g">pages:0</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.jvcir.2021.103019</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_21</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_50</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_72</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_136</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_162</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_165</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_176</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_181</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_203</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_227</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_352</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_676</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_791</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_1018</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">75</subfield><subfield code="j">2021</subfield><subfield code="h">0</subfield></datafield></record></collection>
|
score |
7.400078 |