Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds
Due to the advantages of 3D point clouds over 2D optical images, research on scene understanding in 3D point clouds has been attracting increasing attention from academia and industry. However, many 3D scene understanding methods require abundant supervised information to train a data-driven model, and acquiring that supervision relies on manual annotation, which is laborious and arduous. To reduce the manual effort of annotating training samples, this paper studies a unified neural network that segments 3D objects out of point clouds interactively. In particular, to improve segmentation accuracy, the boundary information of 3D objects in point clouds is encoded as a boundary energy term in a Markov Random Field (MRF) model. The MRF model with the boundary energy term is then integrated with a Graph Neural Network (GNN) to obtain a compact representation for generating boundary-preserved 3D objects. The proposed method is evaluated on two point-cloud datasets acquired by different types of laser scanning systems, i.e., terrestrial and mobile laser scanning. Comparative experiments show that the proposed method is effective and outperforms competing methods for 3D object segmentation in different point-cloud scenarios.
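The boundary energy term described in the abstract can be illustrated with a hedged sketch. This is not the authors' implementation: the exact energy form, the weights `lam` and `beta`, and the `boundary_prob` input are illustrative assumptions. The idea is that a labeling is scored by a unary data term, a pairwise smoothness term over neighbor edges, and a boundary term that makes label discontinuities cheap where an object boundary is likely.

```python
# Hedged sketch (not the authors' implementation): an MRF-style energy for
# point-cloud segmentation with a boundary term, as described in the abstract.
# The energy form, the weights lam/beta, and the boundary_prob input are
# illustrative assumptions.

def mrf_energy(unary, labels, edges, boundary_prob, lam=1.0, beta=1.0):
    """Score one labeling of a point cloud (lower is better).

    unary[i][k]      -- cost of assigning label k to point i (data term)
    labels[i]        -- chosen label for point i
    edges            -- (i, j) pairs of neighboring points
    boundary_prob[i] -- estimated probability that point i lies on an
                        object boundary (e.g. predicted by a network)
    """
    # Unary (data) term: cost of each point's assigned label.
    e_unary = sum(unary[i][labels[i]] for i in range(len(labels)))

    e_smooth = 0.0    # penalize label disagreement between neighbors...
    e_boundary = 0.0  # ...unless the cut falls on a likely object boundary
    for i, j in edges:
        if labels[i] != labels[j]:
            e_smooth += 1.0
            e_boundary += 1.0 - max(boundary_prob[i], boundary_prob[j])

    return e_unary + lam * e_smooth + beta * e_boundary
```

Minimizing such an energy (e.g. with graph cuts, or jointly with a GNN as the paper proposes) favors segmentations whose label changes coincide with detected object boundaries, which is what "boundary-preserved" objects refers to.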
Detailed description

Author(s): Huan Luo, Quan Zheng, Lina Fang, Yingya Guo, Wenzhong Guo, Cheng Wang, Jonathan Li (all: author)
Format: E-Article
Language: English
Published: 2021
Keywords: Point Cloud; 3D Object Segmentation; Boundary Constraint; Graph Neural Network; Markov Random Field
In: International Journal of Applied Earth Observations and Geoinformation, Elsevier, vol. 104 (2021), p. 102564-
Parent work reference: volume:104 ; year:2021 ; pages:102564-
Links:
DOI / URN: 10.1016/j.jag.2021.102564
Catalog ID: DOAJ029130336
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | DOAJ029130336 | ||
003 | DE-627 | ||
005 | 20230501184618.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230226s2021 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.jag.2021.102564 |2 doi | |
035 | |a (DE-627)DOAJ029130336 | ||
035 | |a (DE-599)DOAJ88ddb1228eb24c109cc757e7c362f8aa | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
050 | 0 | |a GB3-5030 | |
050 | 0 | |a GE1-350 | |
100 | 0 | |a Huan Luo |e verfasserin |4 aut | |
245 | 1 | 0 | |a Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds |
264 | 1 | |c 2021 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Due to the advantages of 3D point clouds over 2D optical images, research on scene understanding in 3D point clouds has been attracting increasing attention from academia and industry. However, many 3D scene understanding methods require abundant supervised information to train a data-driven model, and acquiring that supervision relies on manual annotation, which is laborious and arduous. To reduce the manual effort of annotating training samples, this paper studies a unified neural network that segments 3D objects out of point clouds interactively. In particular, to improve segmentation accuracy, the boundary information of 3D objects in point clouds is encoded as a boundary energy term in a Markov Random Field (MRF) model. The MRF model with the boundary energy term is then integrated with a Graph Neural Network (GNN) to obtain a compact representation for generating boundary-preserved 3D objects. The proposed method is evaluated on two point-cloud datasets acquired by different types of laser scanning systems, i.e., terrestrial and mobile laser scanning. Comparative experiments show that the proposed method is effective and outperforms competing methods for 3D object segmentation in different point-cloud scenarios. | ||
650 | 4 | |a Point Cloud | |
650 | 4 | |a 3D Object Segmentation | |
650 | 4 | |a Boundary Constraint | |
650 | 4 | |a Graph Neural Network | |
650 | 4 | |a Markov Random Field | |
653 | 0 | |a Physical geography | |
653 | 0 | |a Environmental sciences | |
700 | 0 | |a Quan Zheng |e verfasserin |4 aut | |
700 | 0 | |a Lina Fang |e verfasserin |4 aut | |
700 | 0 | |a Yingya Guo |e verfasserin |4 aut | |
700 | 0 | |a Wenzhong Guo |e verfasserin |4 aut | |
700 | 0 | |a Cheng Wang |e verfasserin |4 aut | |
700 | 0 | |a Jonathan Li |e verfasserin |4 aut | |
773 | 0 | 8 | |i In |t International Journal of Applied Earth Observations and Geoinformation |d Elsevier, 2022 |g 104(2021), Seite 102564- |w (DE-627)359784119 |w (DE-600)2097960-5 |x 1872826X |7 nnns |
773 | 1 | 8 | |g volume:104 |g year:2021 |g pages:102564- |
856 | 4 | 0 | |u https://doi.org/10.1016/j.jag.2021.102564 |z kostenfrei |
856 | 4 | 0 | |u https://doaj.org/article/88ddb1228eb24c109cc757e7c362f8aa |z kostenfrei |
856 | 4 | 0 | |u http://www.sciencedirect.com/science/article/pii/S0303243421002713 |z kostenfrei |
856 | 4 | 2 | |u https://doaj.org/toc/1569-8432 |y Journal toc |z kostenfrei |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_DOAJ | ||
912 | |a SSG-OLC-PHA | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_4012 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4367 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 104 |j 2021 |h 102564- |
|
callnumber-first |
G - Geography, Anthropology, Recreation |
author |
Huan Luo |
spellingShingle |
Huan Luo misc GB3-5030 misc GE1-350 misc Point Cloud misc 3D Object Segmentation misc Boundary Constraint misc Graph Neural Network misc Markov Random Field misc Physical geography misc Environmental sciences Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds |
authorStr |
Huan Luo |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)359784119 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
GB3-5030 |
illustrated |
Not Illustrated |
issn |
1872826X |
topic_title |
GB3-5030 GE1-350 Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds Point Cloud 3D Object Segmentation Boundary Constraint Graph Neural Network Markov Random Field |
topic |
misc GB3-5030 misc GE1-350 misc Point Cloud misc 3D Object Segmentation misc Boundary Constraint misc Graph Neural Network misc Markov Random Field misc Physical geography misc Environmental sciences |
topic_unstemmed |
misc GB3-5030 misc GE1-350 misc Point Cloud misc 3D Object Segmentation misc Boundary Constraint misc Graph Neural Network misc Markov Random Field misc Physical geography misc Environmental sciences |
topic_browse |
misc GB3-5030 misc GE1-350 misc Point Cloud misc 3D Object Segmentation misc Boundary Constraint misc Graph Neural Network misc Markov Random Field misc Physical geography misc Environmental sciences |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
International Journal of Applied Earth Observations and Geoinformation |
hierarchy_parent_id |
359784119 |
hierarchy_top_title |
International Journal of Applied Earth Observations and Geoinformation |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)359784119 (DE-600)2097960-5 |
title |
Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds |
ctrlnum |
(DE-627)DOAJ029130336 (DE-599)DOAJ88ddb1228eb24c109cc757e7c362f8aa |
title_full |
Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds |
author_sort |
Huan Luo |
journal |
International Journal of Applied Earth Observations and Geoinformation |
journalStr |
International Journal of Applied Earth Observations and Geoinformation |
callnumber-first-code |
G |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
txt |
container_start_page |
102564 |
author_browse |
Huan Luo Quan Zheng Lina Fang Yingya Guo Wenzhong Guo Cheng Wang Jonathan Li |
container_volume |
104 |
class |
GB3-5030 GE1-350 |
format_se |
Elektronische Aufsätze |
author-letter |
Huan Luo |
doi_str_mv |
10.1016/j.jag.2021.102564 |
author2-role |
verfasserin |
title_sort |
boundary-aware graph markov neural network for semiautomated object segmentation from point clouds |
callnumber |
GB3-5030 |
title_auth |
Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds |
abstract |
Owing to the advantages of 3D point clouds over 2D optical images, research on scene understanding in 3D point clouds has attracted increasing attention from academia and industry. However, many 3D scene understanding methods require abundant supervised information for training a data-driven model. Acquiring such supervised information relies on manual annotation, which is laborious and arduous. Therefore, to reduce the manual effort of annotating training samples, this paper studies a unified neural network for interactively segmenting 3D objects from point clouds. In particular, to achieve accurate object segmentation, the boundary information of 3D objects in point clouds is encoded as a boundary energy term in a Markov Random Field (MRF) model. Moreover, the MRF model with the boundary energy term is naturally integrated with a Graph Neural Network (GNN) to obtain a compact representation for generating boundary-preserved 3D objects. The proposed method is evaluated on two point-cloud datasets obtained from different types of laser scanning systems, i.e., a terrestrial laser scanning system and a mobile laser scanning system. Comparative experiments show that the proposed method is effective and superior for 3D object segmentation in different point-cloud scenarios.
abstractGer |
Owing to the advantages of 3D point clouds over 2D optical images, research on scene understanding in 3D point clouds has attracted increasing attention from academia and industry. However, many 3D scene understanding methods require abundant supervised information for training a data-driven model. Acquiring such supervised information relies on manual annotation, which is laborious and arduous. Therefore, to reduce the manual effort of annotating training samples, this paper studies a unified neural network for interactively segmenting 3D objects from point clouds. In particular, to achieve accurate object segmentation, the boundary information of 3D objects in point clouds is encoded as a boundary energy term in a Markov Random Field (MRF) model. Moreover, the MRF model with the boundary energy term is naturally integrated with a Graph Neural Network (GNN) to obtain a compact representation for generating boundary-preserved 3D objects. The proposed method is evaluated on two point-cloud datasets obtained from different types of laser scanning systems, i.e., a terrestrial laser scanning system and a mobile laser scanning system. Comparative experiments show that the proposed method is effective and superior for 3D object segmentation in different point-cloud scenarios.
abstract_unstemmed |
Owing to the advantages of 3D point clouds over 2D optical images, research on scene understanding in 3D point clouds has attracted increasing attention from academia and industry. However, many 3D scene understanding methods require abundant supervised information for training a data-driven model. Acquiring such supervised information relies on manual annotation, which is laborious and arduous. Therefore, to reduce the manual effort of annotating training samples, this paper studies a unified neural network for interactively segmenting 3D objects from point clouds. In particular, to achieve accurate object segmentation, the boundary information of 3D objects in point clouds is encoded as a boundary energy term in a Markov Random Field (MRF) model. Moreover, the MRF model with the boundary energy term is naturally integrated with a Graph Neural Network (GNN) to obtain a compact representation for generating boundary-preserved 3D objects. The proposed method is evaluated on two point-cloud datasets obtained from different types of laser scanning systems, i.e., a terrestrial laser scanning system and a mobile laser scanning system. Comparative experiments show that the proposed method is effective and superior for 3D object segmentation in different point-cloud scenarios.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ SSG-OLC-PHA GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2008 GBV_ILN_2014 GBV_ILN_2025 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds |
url |
https://doi.org/10.1016/j.jag.2021.102564 https://doaj.org/article/88ddb1228eb24c109cc757e7c362f8aa http://www.sciencedirect.com/science/article/pii/S0303243421002713 https://doaj.org/toc/1569-8432 |
remote_bool |
true |
author2 |
Quan Zheng Lina Fang Yingya Guo Wenzhong Guo Cheng Wang Jonathan Li |
author2Str |
Quan Zheng Lina Fang Yingya Guo Wenzhong Guo Cheng Wang Jonathan Li |
ppnlink |
359784119 |
callnumber-subject |
GB - Physical Geography |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.jag.2021.102564 |
callnumber-a |
GB3-5030 |
up_date |
2024-07-03T21:19:15.535Z |
_version_ |
1803594294138765312 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ029130336</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230501184618.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230226s2021 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.jag.2021.102564</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ029130336</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ88ddb1228eb24c109cc757e7c362f8aa</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">GB3-5030</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">GE1-350</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Huan Luo</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Boundary-Aware graph Markov neural network for semiautomated object segmentation from point clouds</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield 
tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Owing to the advantages of 3D point clouds over 2D optical images, research on scene understanding in 3D point clouds has attracted increasing attention from academia and industry. However, many 3D scene understanding methods require abundant supervised information for training a data-driven model. Acquiring such supervised information relies on manual annotation, which is laborious and arduous. Therefore, to reduce the manual effort of annotating training samples, this paper studies a unified neural network for interactively segmenting 3D objects from point clouds. In particular, to achieve accurate object segmentation, the boundary information of 3D objects in point clouds is encoded as a boundary energy term in a Markov Random Field (MRF) model. Moreover, the MRF model with the boundary energy term is naturally integrated with a Graph Neural Network (GNN) to obtain a compact representation for generating boundary-preserved 3D objects. The proposed method is evaluated on two point-cloud datasets obtained from different types of laser scanning systems, i.e., a terrestrial laser scanning system and a mobile laser scanning system. 
Comparative experiments show that the proposed method is effective and superior for 3D object segmentation in different point-cloud scenarios.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Point Cloud</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">3D Object Segmentation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Boundary Constraint</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Graph Neural Network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Markov Random Field</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Physical geography</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Environmental sciences</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Quan Zheng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Lina Fang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Yingya Guo</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Wenzhong Guo</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Cheng Wang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Jonathan Li</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">International Journal of Applied Earth Observations 
and Geoinformation</subfield><subfield code="d">Elsevier, 2022</subfield><subfield code="g">104(2021), Seite 102564-</subfield><subfield code="w">(DE-627)359784119</subfield><subfield code="w">(DE-600)2097960-5</subfield><subfield code="x">1872826X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:104</subfield><subfield code="g">year:2021</subfield><subfield code="g">pages:102564-</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.jag.2021.102564</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/88ddb1228eb24c109cc757e7c362f8aa</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">http://www.sciencedirect.com/science/article/pii/S0303243421002713</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/1569-8432</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">104</subfield><subfield code="j">2021</subfield><subfield code="h">102564-</subfield></datafield></record></collection>
|
score |
7.402647 |