Guided filter-based multi-scale super-resolution reconstruction
The learning-based super-resolution reconstruction method feeds a low-resolution image into a network, which learns the non-linear mapping between low-resolution and high-resolution images. In this study, a multi-scale super-resolution reconstruction network is used to fuse...
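The fusion step this abstract describes relies on a guided image filter, a standard edge-preserving filter (He et al.). As a rough single-channel sketch of that filter — an illustrative NumPy/SciPy version, not the authors' code, with placeholder `radius` and `eps` values — it can be written as:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Filter `src` so the output is locally a linear function of `guide`.

    In each (2*radius+1)^2 window the output is modelled as
    q = a * guide + b, with a and b fit by linear regression;
    eps regularises a (larger eps -> smoother result).
    """
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)           # local mean of guide
    mean_p = uniform_filter(src, size)             # local mean of source
    corr_I = uniform_filter(guide * guide, size)
    corr_Ip = uniform_filter(guide * src, size)
    var_I = corr_I - mean_I * mean_I               # local variance of guide
    cov_Ip = corr_Ip - mean_I * mean_p             # local covariance
    a = cov_Ip / (var_I + eps)                     # regression slope
    b = mean_p - a * mean_I                        # regression intercept
    # average the per-window coefficients before applying them
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```

In the scheme described here, one would presumably pass the real high-resolution image as `guide` and the network's reconstruction as `src` (the abstract does not specify the roles), yielding the fused image used for secondary training.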
Detailed description

Author(s): Xiaomei Feng [author]; Jinjiang Li [author]; Zhen Hua [author]
Format: E-article
Language: English
Published: 2020
Published in: CAAI Transactions on Intelligence Technology - Wiley, 2019, (2020)
Coverage: year:2020
DOI / URN: 10.1049/trit.2019.0065
Catalog ID: DOAJ058473351
LEADER  01000caa a22002652 4500
001     DOAJ058473351
003     DE-627
005     20230308224800.0
007     cr uuu---uuuuu
008     230228s2020 xx |||||o 00| ||eng c
024 7_  |a 10.1049/trit.2019.0065 |2 doi
035 __  |a (DE-627)DOAJ058473351
035 __  |a (DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6
040 __  |a DE-627 |b ger |c DE-627 |e rakwb
041 __  |a eng
050 _0  |a P98-98.5
050 _0  |a QA76.75-76.765
100 0_  |a Xiaomei Feng |e verfasserin |4 aut
245 10  |a Guided filter-based multi-scale super-resolution reconstruction
264 _1  |c 2020
336 __  |a Text |b txt |2 rdacontent
337 __  |a Computermedien |b c |2 rdamedia
338 __  |a Online-Ressource |b cr |2 rdacarrier
520 __  |a The learning-based super-resolution reconstruction method feeds a low-resolution image into a network, which learns the non-linear mapping between low-resolution and high-resolution images. In this study, a multi-scale super-resolution reconstruction network fuses the effective features of images at different scales and learns the low-to-high-resolution mapping from coarse to fine, realising end-to-end super-resolution reconstruction. Because the loss of some features in the low-resolution image degrades the quality of the reconstructed image, this study adopts a multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale network is merged with the real high-resolution image by a guided image filter to generate a new image, which is then used for secondary training of the network. The newly generated image compensates for the detail and texture information lost in the low-resolution image, improving the super-resolution result. Compared with existing super-resolution reconstruction schemes, both the accuracy and the speed of reconstruction are improved.
650 _4  |a image resolution
650 _4  |a image texture
650 _4  |a learning (artificial intelligence)
650 _4  |a image reconstruction
650 _4  |a image filtering
650 _4  |a high-resolution image
650 _4  |a low-resolution image loss
650 _4  |a super-resolution reconstruction effect
650 _4  |a guided filter-based multiscale super-resolution reconstruction
650 _4  |a learning-based super-resolution reconstruction method
650 _4  |a multiscale super-resolution reconstruction network
650 _4  |a end-to-end super-resolution reconstruction task
650 _4  |a multiscale super-resolution reconstruction method
650 _4  |a guided image filtering
650 _4  |a multiscale super-resolution network
650 _4  |a guide image filter
650 _4  |a newly generated image
650 _4  |a super-resolution reconstruction scheme
653 _0  |a Computational linguistics. Natural language processing
653 _0  |a Computer software
700 0_  |a Jinjiang Li |e verfasserin |4 aut
700 0_  |a Zhen Hua |e verfasserin |4 aut
773 08  |i In |t CAAI Transactions on Intelligence Technology |d Wiley, 2019 |g (2020) |w (DE-627)882214071 |w (DE-600)2888181-3 |x 24682322 |7 nnns
773 18  |g year:2020
856 40  |u https://doi.org/10.1049/trit.2019.0065 |z kostenfrei
856 40  |u https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6 |z kostenfrei
856 40  |u https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065 |z kostenfrei
856 42  |u https://doaj.org/toc/2468-2322 |y Journal toc |z kostenfrei
912 __  |a GBV_USEFLAG_A
912 __  |a SYSFLAG_A
912 __  |a GBV_DOAJ
912 __  |a GBV_ILN_11
912 __  |a GBV_ILN_20
912 __  |a GBV_ILN_22
912 __  |a GBV_ILN_23
912 __  |a GBV_ILN_24
912 __  |a GBV_ILN_31
912 __  |a GBV_ILN_39
912 __  |a GBV_ILN_40
912 __  |a GBV_ILN_60
912 __  |a GBV_ILN_62
912 __  |a GBV_ILN_63
912 __  |a GBV_ILN_65
912 __  |a GBV_ILN_69
912 __  |a GBV_ILN_70
912 __  |a GBV_ILN_73
912 __  |a GBV_ILN_95
912 __  |a GBV_ILN_105
912 __  |a GBV_ILN_110
912 __  |a GBV_ILN_151
912 __  |a GBV_ILN_161
912 __  |a GBV_ILN_170
912 __  |a GBV_ILN_171
912 __  |a GBV_ILN_213
912 __  |a GBV_ILN_224
912 __  |a GBV_ILN_230
912 __  |a GBV_ILN_285
912 __  |a GBV_ILN_293
912 __  |a GBV_ILN_370
912 __  |a GBV_ILN_602
912 __  |a GBV_ILN_636
912 __  |a GBV_ILN_2004
912 __  |a GBV_ILN_2005
912 __  |a GBV_ILN_2006
912 __  |a GBV_ILN_2007
912 __  |a GBV_ILN_2010
912 __  |a GBV_ILN_2011
912 __  |a GBV_ILN_2014
912 __  |a GBV_ILN_2026
912 __  |a GBV_ILN_2027
912 __  |a GBV_ILN_2034
912 __  |a GBV_ILN_2037
912 __  |a GBV_ILN_2038
912 __  |a GBV_ILN_2044
912 __  |a GBV_ILN_2048
912 __  |a GBV_ILN_2049
912 __  |a GBV_ILN_2050
912 __  |a GBV_ILN_2055
912 __  |a GBV_ILN_2056
912 __  |a GBV_ILN_2057
912 __  |a GBV_ILN_2059
912 __  |a GBV_ILN_2061
912 __  |a GBV_ILN_2064
912 __  |a GBV_ILN_2068
912 __  |a GBV_ILN_2088
912 __  |a GBV_ILN_2106
912 __  |a GBV_ILN_2108
912 __  |a GBV_ILN_2110
912 __  |a GBV_ILN_2111
912 __  |a GBV_ILN_2118
912 __  |a GBV_ILN_2122
912 __  |a GBV_ILN_2143
912 __  |a GBV_ILN_2144
912 __  |a GBV_ILN_2147
912 __  |a GBV_ILN_2148
912 __  |a GBV_ILN_2152
912 __  |a GBV_ILN_2153
912 __  |a GBV_ILN_2232
912 __  |a GBV_ILN_2336
912 __  |a GBV_ILN_2470
912 __  |a GBV_ILN_2507
912 __  |a GBV_ILN_2522
912 __  |a GBV_ILN_4012
912 __  |a GBV_ILN_4035
912 __  |a GBV_ILN_4037
912 __  |a GBV_ILN_4046
912 __  |a GBV_ILN_4112
912 __  |a GBV_ILN_4125
912 __  |a GBV_ILN_4126
912 __  |a GBV_ILN_4242
912 __  |a GBV_ILN_4249
912 __  |a GBV_ILN_4251
912 __  |a GBV_ILN_4305
912 __  |a GBV_ILN_4306
912 __  |a GBV_ILN_4307
912 __  |a GBV_ILN_4313
912 __  |a GBV_ILN_4322
912 __  |a GBV_ILN_4323
912 __  |a GBV_ILN_4324
912 __  |a GBV_ILN_4325
912 __  |a GBV_ILN_4326
912 __  |a GBV_ILN_4333
912 __  |a GBV_ILN_4334
912 __  |a GBV_ILN_4335
912 __  |a GBV_ILN_4336
912 __  |a GBV_ILN_4338
912 __  |a GBV_ILN_4367
912 __  |a GBV_ILN_4700
951 __  |a AR
952 __  |j 2020
author_variant |
x f xf j l jl z h zh z h zh |
---|---|
matchkey_str |
article:24682322:2020----::udditraemliclspreouin |
hierarchy_sort_str |
2020 |
callnumber-subject-code |
P |
publishDate |
2020 |
allfields |
10.1049/trit.2019.0065 doi (DE-627)DOAJ058473351 (DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6 DE-627 ger DE-627 rakwb eng P98-98.5 QA76.75-76.765 Xiaomei Feng verfasserin aut Guided filter-based multi-scale super-resolution reconstruction 2020 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image.Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. 
image resolution image texture learning (artificial intelligence) image reconstruction image filtering high-resolution image low-resolution image loss super-resolution reconstruction effect guided filter-based multiscale super-resolution reconstruction learning-based super-resolution reconstruction method multiscale super-resolution reconstruction network end-to-end super-resolution reconstruction task multiscale super-resolution reconstruction method guided image filtering multiscale super-resolution network guide image filter newly generated image super-resolution reconstruction scheme Computational linguistics. Natural language processing Computer software Jinjiang Li verfasserin aut Zhen Hua verfasserin aut Zhen Hua verfasserin aut In CAAI Transactions on Intelligence Technology Wiley, 2019 (2020) (DE-627)882214071 (DE-600)2888181-3 24682322 nnns year:2020 https://doi.org/10.1049/trit.2019.0065 kostenfrei https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6 kostenfrei https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065 kostenfrei https://doaj.org/toc/2468-2322 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 
GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2020 |
spelling |
10.1049/trit.2019.0065 doi (DE-627)DOAJ058473351 (DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6 DE-627 ger DE-627 rakwb eng P98-98.5 QA76.75-76.765 Xiaomei Feng verfasserin aut Guided filter-based multi-scale super-resolution reconstruction 2020 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image.Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. 
image resolution image texture learning (artificial intelligence) image reconstruction image filtering high-resolution image low-resolution image loss super-resolution reconstruction effect guided filter-based multiscale super-resolution reconstruction learning-based super-resolution reconstruction method multiscale super-resolution reconstruction network end-to-end super-resolution reconstruction task multiscale super-resolution reconstruction method guided image filtering multiscale super-resolution network guide image filter newly generated image super-resolution reconstruction scheme Computational linguistics. Natural language processing Computer software Jinjiang Li verfasserin aut Zhen Hua verfasserin aut Zhen Hua verfasserin aut In CAAI Transactions on Intelligence Technology Wiley, 2019 (2020) (DE-627)882214071 (DE-600)2888181-3 24682322 nnns year:2020 https://doi.org/10.1049/trit.2019.0065 kostenfrei https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6 kostenfrei https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065 kostenfrei https://doaj.org/toc/2468-2322 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 
GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2020 |
allfields_unstemmed |
10.1049/trit.2019.0065 doi (DE-627)DOAJ058473351 (DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6 DE-627 ger DE-627 rakwb eng P98-98.5 QA76.75-76.765 Xiaomei Feng verfasserin aut Guided filter-based multi-scale super-resolution reconstruction 2020 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image.Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. 
image resolution image texture learning (artificial intelligence) image reconstruction image filtering high-resolution image low-resolution image loss super-resolution reconstruction effect guided filter-based multiscale super-resolution reconstruction learning-based super-resolution reconstruction method multiscale super-resolution reconstruction network end-to-end super-resolution reconstruction task multiscale super-resolution reconstruction method guided image filtering multiscale super-resolution network guide image filter newly generated image super-resolution reconstruction scheme Computational linguistics. Natural language processing Computer software Jinjiang Li verfasserin aut Zhen Hua verfasserin aut Zhen Hua verfasserin aut In CAAI Transactions on Intelligence Technology Wiley, 2019 (2020) (DE-627)882214071 (DE-600)2888181-3 24682322 nnns year:2020 https://doi.org/10.1049/trit.2019.0065 kostenfrei https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6 kostenfrei https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065 kostenfrei https://doaj.org/toc/2468-2322 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 
GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2020 |
allfieldsGer |
10.1049/trit.2019.0065 doi (DE-627)DOAJ058473351 (DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6 DE-627 ger DE-627 rakwb eng P98-98.5 QA76.75-76.765 Xiaomei Feng verfasserin aut Guided filter-based multi-scale super-resolution reconstruction 2020 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image.Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. 
image resolution image texture learning (artificial intelligence) image reconstruction image filtering high-resolution image low-resolution image loss super-resolution reconstruction effect guided filter-based multiscale super-resolution reconstruction learning-based super-resolution reconstruction method multiscale super-resolution reconstruction network end-to-end super-resolution reconstruction task multiscale super-resolution reconstruction method guided image filtering multiscale super-resolution network guide image filter newly generated image super-resolution reconstruction scheme Computational linguistics. Natural language processing Computer software Jinjiang Li verfasserin aut Zhen Hua verfasserin aut Zhen Hua verfasserin aut In CAAI Transactions on Intelligence Technology Wiley, 2019 (2020) (DE-627)882214071 (DE-600)2888181-3 24682322 nnns year:2020 https://doi.org/10.1049/trit.2019.0065 kostenfrei https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6 kostenfrei https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065 kostenfrei https://doaj.org/toc/2468-2322 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 
GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2020 |
allfieldsSound |
10.1049/trit.2019.0065 doi (DE-627)DOAJ058473351 (DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6 DE-627 ger DE-627 rakwb eng P98-98.5 QA76.75-76.765 Xiaomei Feng verfasserin aut Guided filter-based multi-scale super-resolution reconstruction 2020 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image.Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. 
image resolution image texture learning (artificial intelligence) image reconstruction image filtering high-resolution image low-resolution image loss super-resolution reconstruction effect guided filter-based multiscale super-resolution reconstruction learning-based super-resolution reconstruction method multiscale super-resolution reconstruction network end-to-end super-resolution reconstruction task multiscale super-resolution reconstruction method guided image filtering multiscale super-resolution network guide image filter newly generated image super-resolution reconstruction scheme Computational linguistics. Natural language processing Computer software Jinjiang Li verfasserin aut Zhen Hua verfasserin aut Zhen Hua verfasserin aut In CAAI Transactions on Intelligence Technology Wiley, 2019 (2020) (DE-627)882214071 (DE-600)2888181-3 24682322 nnns year:2020 https://doi.org/10.1049/trit.2019.0065 kostenfrei https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6 kostenfrei https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065 kostenfrei https://doaj.org/toc/2468-2322 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 
GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 2020 |
language |
English |
source |
In CAAI Transactions on Intelligence Technology (2020) year:2020 |
sourceStr |
In CAAI Transactions on Intelligence Technology (2020) year:2020 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
image resolution image texture learning (artificial intelligence) image reconstruction image filtering high-resolution image low-resolution image loss super-resolution reconstruction effect guided filter-based multiscale super-resolution reconstruction learning-based super-resolution reconstruction method multiscale super-resolution reconstruction network end-to-end super-resolution reconstruction task multiscale super-resolution reconstruction method guided image filtering multiscale super-resolution network guide image filter newly generated image super-resolution reconstruction scheme Computational linguistics. Natural language processing Computer software |
isfreeaccess_bool |
true |
container_title |
CAAI Transactions on Intelligence Technology |
authorswithroles_txt_mv |
Xiaomei Feng @@aut@@ Jinjiang Li @@aut@@ Zhen Hua @@aut@@ |
publishDateDaySort_date |
2020-01-01T00:00:00Z |
hierarchy_top_id |
882214071 |
id |
DOAJ058473351 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ058473351</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230308224800.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230228s2020 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1049/trit.2019.0065</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ058473351</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">P98-98.5</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">QA76.75-76.765</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Xiaomei Feng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Guided filter-based multi-scale super-resolution reconstruction</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2020</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" 
"><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. 
The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image. Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image resolution</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image texture</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">learning (artificial intelligence)</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image reconstruction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image filtering</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">high-resolution image</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">low-resolution image loss</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">super-resolution reconstruction effect</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">guided filter-based multiscale super-resolution reconstruction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">learning-based super-resolution reconstruction method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">multiscale super-resolution reconstruction network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">end-to-end super-resolution reconstruction task</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">multiscale super-resolution reconstruction method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">guided image filtering</subfield></datafield><datafield tag="650" ind1=" " 
ind2="4"><subfield code="a">multiscale super-resolution network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">guide image filter</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">newly generated image</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">super-resolution reconstruction scheme</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Computational linguistics. Natural language processing</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Computer software</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Jinjiang Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Zhen Hua</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">CAAI Transactions on Intelligence Technology</subfield><subfield code="d">Wiley, 2019</subfield><subfield code="g">(2020)</subfield><subfield code="w">(DE-627)882214071</subfield><subfield code="w">(DE-600)2888181-3</subfield><subfield code="x">24682322</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">year:2020</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1049/trit.2019.0065</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" 
ind2="0"><subfield code="u">https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2468-2322</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="j">2020</subfield></datafield></record></collection>
|
callnumber-first |
P - Language and Literature |
author |
Xiaomei Feng |
spellingShingle |
Xiaomei Feng misc P98-98.5 misc QA76.75-76.765 misc image resolution misc image texture misc learning (artificial intelligence) misc image reconstruction misc image filtering misc high-resolution image misc low-resolution image loss misc super-resolution reconstruction effect misc guided filter-based multiscale super-resolution reconstruction misc learning-based super-resolution reconstruction method misc multiscale super-resolution reconstruction network misc end-to-end super-resolution reconstruction task misc multiscale super-resolution reconstruction method misc guided image filtering misc multiscale super-resolution network misc guide image filter misc newly generated image misc super-resolution reconstruction scheme misc Computational linguistics. Natural language processing misc Computer software Guided filter-based multi-scale super-resolution reconstruction |
authorStr |
Xiaomei Feng |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)882214071 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut
collection |
DOAJ |
remote_str |
true |
callnumber-label |
P98-98 |
illustrated |
Not Illustrated |
issn |
24682322 |
topic_title |
P98-98.5 QA76.75-76.765 Guided filter-based multi-scale super-resolution reconstruction image resolution image texture learning (artificial intelligence) image reconstruction image filtering high-resolution image low-resolution image loss super-resolution reconstruction effect guided filter-based multiscale super-resolution reconstruction learning-based super-resolution reconstruction method multiscale super-resolution reconstruction network end-to-end super-resolution reconstruction task multiscale super-resolution reconstruction method guided image filtering multiscale super-resolution network guide image filter newly generated image super-resolution reconstruction scheme |
topic |
misc P98-98.5 misc QA76.75-76.765 misc image resolution misc image texture misc learning (artificial intelligence) misc image reconstruction misc image filtering misc high-resolution image misc low-resolution image loss misc super-resolution reconstruction effect misc guided filter-based multiscale super-resolution reconstruction misc learning-based super-resolution reconstruction method misc multiscale super-resolution reconstruction network misc end-to-end super-resolution reconstruction task misc multiscale super-resolution reconstruction method misc guided image filtering misc multiscale super-resolution network misc guide image filter misc newly generated image misc super-resolution reconstruction scheme misc Computational linguistics. Natural language processing misc Computer software |
topic_unstemmed |
misc P98-98.5 misc QA76.75-76.765 misc image resolution misc image texture misc learning (artificial intelligence) misc image reconstruction misc image filtering misc high-resolution image misc low-resolution image loss misc super-resolution reconstruction effect misc guided filter-based multiscale super-resolution reconstruction misc learning-based super-resolution reconstruction method misc multiscale super-resolution reconstruction network misc end-to-end super-resolution reconstruction task misc multiscale super-resolution reconstruction method misc guided image filtering misc multiscale super-resolution network misc guide image filter misc newly generated image misc super-resolution reconstruction scheme misc Computational linguistics. Natural language processing misc Computer software |
topic_browse |
misc P98-98.5 misc QA76.75-76.765 misc image resolution misc image texture misc learning (artificial intelligence) misc image reconstruction misc image filtering misc high-resolution image misc low-resolution image loss misc super-resolution reconstruction effect misc guided filter-based multiscale super-resolution reconstruction misc learning-based super-resolution reconstruction method misc multiscale super-resolution reconstruction network misc end-to-end super-resolution reconstruction task misc multiscale super-resolution reconstruction method misc guided image filtering misc multiscale super-resolution network misc guide image filter misc newly generated image misc super-resolution reconstruction scheme misc Computational linguistics. Natural language processing misc Computer software |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
CAAI Transactions on Intelligence Technology |
hierarchy_parent_id |
882214071 |
hierarchy_top_title |
CAAI Transactions on Intelligence Technology |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)882214071 (DE-600)2888181-3 |
title |
Guided filter-based multi-scale super-resolution reconstruction |
ctrlnum |
(DE-627)DOAJ058473351 (DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6 |
title_full |
Guided filter-based multi-scale super-resolution reconstruction |
author_sort |
Xiaomei Feng |
journal |
CAAI Transactions on Intelligence Technology |
journalStr |
CAAI Transactions on Intelligence Technology |
callnumber-first-code |
P |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2020 |
contenttype_str_mv |
txt |
author_browse |
Xiaomei Feng Jinjiang Li Zhen Hua |
class |
P98-98.5 QA76.75-76.765 |
format_se |
Elektronische Aufsätze |
author-letter |
Xiaomei Feng |
doi_str_mv |
10.1049/trit.2019.0065 |
author2-role |
verfasserin |
title_sort |
guided filter-based multi-scale super-resolution reconstruction |
callnumber |
P98-98.5 |
title_auth |
Guided filter-based multi-scale super-resolution reconstruction |
abstract |
The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image. Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. |
abstractGer |
The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image. Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. |
abstract_unstemmed |
The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image. Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved. |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_4012 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Guided filter-based multi-scale super-resolution reconstruction |
url |
https://doi.org/10.1049/trit.2019.0065 https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6 https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065 https://doaj.org/toc/2468-2322 |
remote_bool |
true |
author2 |
Jinjiang Li Zhen Hua |
author2Str |
Jinjiang Li Zhen Hua |
ppnlink |
882214071 |
callnumber-subject |
P - Philology and Linguistics |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1049/trit.2019.0065 |
callnumber-a |
P98-98.5 |
up_date |
2024-07-03T18:12:10.470Z |
_version_ |
1803582523805007872 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ058473351</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230308224800.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230228s2020 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1049/trit.2019.0065</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ058473351</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ1167e49957ba42a3b4ad7bc48dadd6d6</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">P98-98.5</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">QA76.75-76.765</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Xiaomei Feng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Guided filter-based multi-scale super-resolution reconstruction</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2020</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" 
"><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">The learning-based super-resolution reconstruction method inputs a low-resolution image into a network, and learns a non-linear mapping relationship between low-resolution and high-resolution through the network. In this study, the multi-scale super-resolution reconstruction network is used to fuse the effective features of different scale images, and the non-linear mapping between low resolution and high resolution is studied from coarse to fine to realise the end-to-end super-resolution reconstruction task. The loss of some features of the low-resolution image will negatively affect the quality of the reconstructed image. To solve the problem of incomplete image features in low-resolution, this study adopts the multi-scale super-resolution reconstruction method based on guided image filtering. The high-resolution image reconstructed by the multi-scale super-resolution network and the real high-resolution image are merged by the guide image filter to generate a new image, and the newly generated image is used for secondary training of the multi-scale super-resolution reconstruction network. 
The newly generated image effectively compensates for the details and texture information lost in the low-resolution image, thereby improving the effect of the super-resolution reconstructed image. Compared with the existing super-resolution reconstruction scheme, the accuracy and speed of super-resolution reconstruction are improved.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image resolution</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image texture</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">learning (artificial intelligence)</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image reconstruction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">image filtering</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">high-resolution image</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">low-resolution image loss</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">super-resolution reconstruction effect</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">guided filter-based multiscale super-resolution reconstruction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">learning-based super-resolution reconstruction method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">multiscale super-resolution reconstruction network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">end-to-end super-resolution reconstruction task</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">multiscale super-resolution reconstruction method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">guided image filtering</subfield></datafield><datafield tag="650" ind1=" " 
ind2="4"><subfield code="a">multiscale super-resolution network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">guide image filter</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">newly generated image</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">super-resolution reconstruction scheme</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Computational linguistics. Natural language processing</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Computer software</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Jinjiang Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Zhen Hua</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">CAAI Transactions on Intelligence Technology</subfield><subfield code="d">Wiley, 2019</subfield><subfield code="g">(2020)</subfield><subfield code="w">(DE-627)882214071</subfield><subfield code="w">(DE-600)2888181-3</subfield><subfield code="x">24682322</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">year:2020</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1049/trit.2019.0065</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/1167e49957ba42a3b4ad7bc48dadd6d6</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" 
ind2="0"><subfield code="u">https://digital-library.theiet.org/content/journals/10.1049/trit.2019.0065</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2468-2322</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="j">2020</subfield></datafield></record></collection>