Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images
The defocus region detection (DRD) problem aims to assign per-pixel predictions of in-focus (clear) areas and defocused (blurred) areas. One of the challenges in this problem is to accurately detect the boundary of the transition region between the focused and defocused regions. To address this issue, this paper proposes a direction-context-inspiration network (DCINet), which exploits directional context effectively. First, directional context is extracted by recurrent neural networks initialized with the identity matrix (IRNN), used to weight the feature maps, and integrated with a two-group integration method, producing coarse DRD maps. Second, these maps are level-integrated under the guidance of the source image, and the coarse maps are refined gradually. The overall DCINet integrates low-level details and high-level semantics efficiently. Experimental results demonstrate that the network detects the boundary of the transition region precisely, achieving state-of-the-art performance.
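The abstract above mentions extracting directional context with recurrent neural networks whose recurrent weights are initialized to the identity matrix (IRNN). As a rough illustration only (not the authors' implementation, which is not reproduced in this record), a single identity-initialized left-to-right sweep over a feature map can be sketched like this:

```python
import numpy as np

def irnn_sweep(features):
    """One left-to-right IRNN sweep over a feature map.

    features: (H, W, C) array. Because the recurrent weight is the
    identity matrix (the 'I' in IRNN), each step reduces to
    h = relu(x + h_prev). A full directional-context module would run
    four such sweeps (left, right, up, down) and combine the results;
    this sketch shows only one direction for clarity.
    """
    H, W, C = features.shape
    out = np.zeros((H, W, C), dtype=float)
    h = np.zeros((H, C))  # hidden state, one row vector per image row
    for col in range(W):
        # identity recurrent weight: the previous hidden state passes
        # through unchanged before the ReLU nonlinearity
        h = np.maximum(features[:, col, :] + h, 0.0)
        out[:, col, :] = h
    return out
```

On a constant all-ones feature map, each column accumulates the activations of all columns to its left, which illustrates how the sweep propagates context along one direction.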
Detailed description

Author(s): Fan Zhao [author]; Haipeng Wang [author]; Wenda Zhao [author]
Format: E-article
Language: English
Published: 2019
Subjects:
Published in: IEEE Access - IEEE, 2014, 7(2019), pages 64737-64743
Published in: volume:7 ; year:2019 ; pages:64737-64743
Links:
DOI / URN: 10.1109/ACCESS.2019.2916332
Catalog ID: DOAJ051963744
LEADER 01000caa a22002652 4500
001 DOAJ051963744
003 DE-627
005 20230308163552.0
007 cr uuu---uuuuu
008 230227s2019 xx |||||o 00| ||eng c
024 7  |a 10.1109/ACCESS.2019.2916332 |2 doi
035    |a (DE-627)DOAJ051963744
035    |a (DE-599)DOAJ1f36b1e88d434a2d8cc044e8eef8d518
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
050  0 |a TK1-9971
100 0  |a Fan Zhao |e verfasserin |4 aut
245 10 |a Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images
264  1 |c 2019
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a The defocus region detection (DRD) problem aims to assign per-pixel predictions of in-focus (clear) areas and defocused (blurred) areas. One of the challenges in this problem is to accurately detect the boundary of the transition region between the focused and defocused regions. To address this issue, this paper proposes a direction-context-inspiration network (DCINet), which exploits directional context effectively. First, directional context is extracted by recurrent neural networks initialized with the identity matrix (IRNN), used to weight the feature maps, and integrated with a two-group integration method, producing coarse DRD maps. Second, these maps are level-integrated under the guidance of the source image, and the coarse maps are refined gradually. The overall DCINet integrates low-level details and high-level semantics efficiently. Experimental results demonstrate that the network detects the boundary of the transition region precisely, achieving state-of-the-art performance.
650  4 |a Defocus region detection
650  4 |a direction-context-inspiration network
650  4 |a level-integration
653  0 |a Electrical engineering. Electronics. Nuclear engineering
700 0  |a Haipeng Wang |e verfasserin |4 aut
700 0  |a Wenda Zhao |e verfasserin |4 aut
773 08 |i In |t IEEE Access |d IEEE, 2014 |g 7(2019), Seite 64737-64743 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 18 |g volume:7 |g year:2019 |g pages:64737-64743
856 40 |u https://doi.org/10.1109/ACCESS.2019.2916332 |z kostenfrei
856 40 |u https://doaj.org/article/1f36b1e88d434a2d8cc044e8eef8d518 |z kostenfrei
856 40 |u https://ieeexplore.ieee.org/document/8713543/ |z kostenfrei
856 42 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_DOAJ
912    |a GBV_ILN_11
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_39
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_95
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_151
912    |a GBV_ILN_161
912    |a GBV_ILN_170
912    |a GBV_ILN_213
912    |a GBV_ILN_230
912    |a GBV_ILN_285
912    |a GBV_ILN_293
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_2014
912    |a GBV_ILN_4012
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4249
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4335
912    |a GBV_ILN_4338
912    |a GBV_ILN_4367
912    |a GBV_ILN_4700
951    |a AR
952    |d 7 |j 2019 |h 64737-64743
author_variant |
f z fz h w hw w z wz |
matchkey_str |
article:21693536:2019----::epietocnetnprtontokodfcseine |
hierarchy_sort_str |
2019 |
callnumber-subject-code |
TK |
publishDate |
2019 |
language |
English |
source |
In IEEE Access 7(2019), Seite 64737-64743 volume:7 year:2019 pages:64737-64743 |
sourceStr |
In IEEE Access 7(2019), Seite 64737-64743 volume:7 year:2019 pages:64737-64743 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Defocus region detection direction-context-inspiration network level-integration Electrical engineering. Electronics. Nuclear engineering |
isfreeaccess_bool |
true |
container_title |
IEEE Access |
authorswithroles_txt_mv |
Fan Zhao @@aut@@ Haipeng Wang @@aut@@ Wenda Zhao @@aut@@ |
publishDateDaySort_date |
2019-01-01T00:00:00Z |
hierarchy_top_id |
728440385 |
id |
DOAJ051963744 |
language_de |
englisch |
callnumber-first |
T - Technology |
author |
Fan Zhao |
spellingShingle |
Fan Zhao misc TK1-9971 misc Defocus region detection misc direction-context-inspiration network misc level-integration misc Electrical engineering. Electronics. Nuclear engineering Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images |
authorStr |
Fan Zhao |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)728440385 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
TK1-9971 |
illustrated |
Not Illustrated |
issn |
21693536 |
topic_title |
TK1-9971 Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images Defocus region detection direction-context-inspiration network level-integration |
topic |
misc TK1-9971 misc Defocus region detection misc direction-context-inspiration network misc level-integration misc Electrical engineering. Electronics. Nuclear engineering |
topic_unstemmed |
misc TK1-9971 misc Defocus region detection misc direction-context-inspiration network misc level-integration misc Electrical engineering. Electronics. Nuclear engineering |
topic_browse |
misc TK1-9971 misc Defocus region detection misc direction-context-inspiration network misc level-integration misc Electrical engineering. Electronics. Nuclear engineering |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IEEE Access |
hierarchy_parent_id |
728440385 |
hierarchy_top_title |
IEEE Access |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)728440385 (DE-600)2687964-5 |
title |
Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images |
ctrlnum |
(DE-627)DOAJ051963744 (DE-599)DOAJ1f36b1e88d434a2d8cc044e8eef8d518 |
title_full |
Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images |
author_sort |
Fan Zhao |
journal |
IEEE Access |
journalStr |
IEEE Access |
callnumber-first-code |
T |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2019 |
contenttype_str_mv |
txt |
container_start_page |
64737 |
author_browse |
Fan Zhao Haipeng Wang Wenda Zhao |
container_volume |
7 |
class |
TK1-9971 |
format_se |
Elektronische Aufsätze |
author-letter |
Fan Zhao |
doi_str_mv |
10.1109/ACCESS.2019.2916332 |
author2-role |
verfasserin |
title_sort |
deep direction-context-inspiration network for defocus region detection in natural images |
callnumber |
TK1-9971 |
title_auth |
Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images |
abstract |
The defocus region detection (DRD) problem aims to assign per-pixel predictions of in-focus (clear) areas and defocused (blurred) areas. One of the challenges in this problem is to accurately detect the boundary of the transition region between the focused and defocused regions. To address this issue, this paper proposes a direction-context-inspiration network (DCINet), which exploits directional context effectively. First, directional context is extracted by recurrent neural networks initialized with the identity matrix (IRNN), used to weight the feature maps, and integrated with a two-group integration method, producing coarse DRD maps. Second, these maps are level-integrated under the guidance of the source image, and the coarse maps are refined gradually. The overall DCINet integrates low-level details and high-level semantics efficiently. Experimental results demonstrate that the network detects the boundary of the transition region precisely, achieving state-of-the-art performance.
abstractGer |
The defocus region detection (DRD) problem aims to assign per-pixel predictions of in-focus (clear) areas and defocused (blurred) areas. One of the challenges in this problem is accurately detecting the boundary of the transition region between the focused and defocused regions. To address this issue, this paper proposes a direction-context-inspiration network (DCINet), which exploits directional context effectively. First, we extract directional context with recurrent neural networks whose recurrent weights are initialized to the identity matrix (IRNN), use it to weight the feature maps, and integrate them with a two-group integration method, which produces coarse DRD maps. Second, the maps are level-integrated under the guidance of the source image, and the coarse maps are refined gradually. The overall DCINet integrates low-level details and high-level semantics efficiently. Experimental results demonstrate that the network detects the boundary of the transition region precisely, achieving state-of-the-art performance.
abstract_unstemmed |
The defocus region detection (DRD) problem aims to assign per-pixel predictions of in-focus (clear) areas and defocused (blurred) areas. One of the challenges in this problem is accurately detecting the boundary of the transition region between the focused and defocused regions. To address this issue, this paper proposes a direction-context-inspiration network (DCINet), which exploits directional context effectively. First, we extract directional context with recurrent neural networks whose recurrent weights are initialized to the identity matrix (IRNN), use it to weight the feature maps, and integrate them with a two-group integration method, which produces coarse DRD maps. Second, the maps are level-integrated under the guidance of the source image, and the coarse maps are refined gradually. The overall DCINet integrates low-level details and high-level semantics efficiently. Experimental results demonstrate that the network detects the boundary of the transition region precisely, achieving state-of-the-art performance.
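The abstract's first step rests on IRNNs, recurrent networks whose recurrent weight matrix starts as the identity and whose activation is ReLU. As a minimal sketch (not the authors' implementation; the function name `irnn_sweep` and the single left-to-right direction are illustrative assumptions), one directional sweep over a feature map might look like this; four such sweeps, one per direction, would yield the directional context the record describes:

```python
import numpy as np

def irnn_sweep(fmap):
    """One left-to-right IRNN sweep over a feature map of shape (H, W, C).

    Hypothetical sketch: recurrent weights are the identity matrix and the
    activation is ReLU, so each column's hidden state accumulates context
    from everything to its left. Sweeps in the other three directions
    would be analogous (reverse columns, or transpose and sweep rows).
    """
    h, w, c = fmap.shape
    rec = np.eye(c)                  # identity-initialized recurrent weights
    out = np.zeros_like(fmap, dtype=float)
    hidden = np.zeros((h, c))        # one hidden state per row
    for x in range(w):
        # new hidden = ReLU(input at this column + recurrent transform)
        hidden = np.maximum(0.0, fmap[:, x, :] + hidden @ rec)
        out[:, x, :] = hidden
    return out
```

For a constant all-ones input the hidden state grows by one per column, which makes the accumulation of left-to-right context easy to verify by hand.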
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images |
url |
https://doi.org/10.1109/ACCESS.2019.2916332 https://doaj.org/article/1f36b1e88d434a2d8cc044e8eef8d518 https://ieeexplore.ieee.org/document/8713543/ https://doaj.org/toc/2169-3536 |
remote_bool |
true |
author2 |
Haipeng Wang Wenda Zhao |
author2Str |
Haipeng Wang Wenda Zhao |
ppnlink |
728440385 |
callnumber-subject |
TK - Electrical and Nuclear Engineering |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1109/ACCESS.2019.2916332 |
callnumber-a |
TK1-9971 |
up_date |
2024-07-03T23:07:09.760Z |
_version_ |
1803601082854670336 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ051963744</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230308163552.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230227s2019 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1109/ACCESS.2019.2916332</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ051963744</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ1f36b1e88d434a2d8cc044e8eef8d518</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">TK1-9971</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Fan Zhao</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Deep Direction-Context-Inspiration Network for Defocus Region Detection in Natural Images</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2019</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield 
code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Defocus region detection (DRD) problem aims to assign per-pixel predictions of focus clear areas and defocus blur areas. One of the challenges in this problem is to accurately detect the boundary of the transition region between the focus and defocus regions. To address this issue, the paper proposes a direction-context-inspiration network (DCINet), which can take advantage of the directional context effectively. First, we extract directional context by recurrent neural networks initialized with the identity matrix (IRNN) to weight the feature maps and integrate them in the two-group integration method, which can produce the coarse DRD maps. Second, the maps are level-integrated with the source image guiding and the coarse maps are refined gradually. The overall DCINet can integrate low-level details and high-level semantics efficiently. The Experimental results demonstrate that the network can detect the boundary of the transition region precisely, achieving the state-of-the-art performance.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Defocus region detection</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">direction-context-inspiration network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">level-integration</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Electrical engineering. Electronics. 
Nuclear engineering</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Haipeng Wang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Wenda Zhao</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IEEE Access</subfield><subfield code="d">IEEE, 2014</subfield><subfield code="g">7(2019), Seite 64737-64743</subfield><subfield code="w">(DE-627)728440385</subfield><subfield code="w">(DE-600)2687964-5</subfield><subfield code="x">21693536</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:7</subfield><subfield code="g">year:2019</subfield><subfield code="g">pages:64737-64743</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1109/ACCESS.2019.2916332</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/1f36b1e88d434a2d8cc044e8eef8d518</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ieeexplore.ieee.org/document/8713543/</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2169-3536</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">7</subfield><subfield code="j">2019</subfield><subfield code="h">64737-64743</subfield></datafield></record></collection>
|