GRNet: Geometric relation network for 3D object detection from point clouds
Rapid detection of 3D objects in indoor environments is essential for indoor mapping and modeling, robotic perception and localization, and building reconstruction. 3D point clouds acquired by a low-cost RGB-D camera have become one of the most commonly used data sources for 3D indoor mapping. However, due to the sparse surfaces, empty object centers, and varying scales of point cloud objects, 3D bounding boxes are challenging to estimate and locate accurately.
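The abstract describes a four-stage detection flow: per-point feature extraction by a backbone, centralization of representative points toward object centers, proposal sampling with feature pooling, and relation-based box regression whose output is the sum of an intra-object and an inter-object branch. The record gives no implementation details, so the PyTorch sketch below only illustrates that flow; every layer size, the pooling radius, the random proposal sampling, and the softmax relation weighting are assumptions, not the authors' design.

```python
# Illustrative sketch of a GRNet-style detection flow as summarized in the abstract.
# All dimensions, the pooling radius, and the relation weighting are assumptions.
import torch
import torch.nn as nn


class GRNetSketch(nn.Module):
    def __init__(self, feat_dim=128, num_proposals=64, box_dim=7, radius=0.3):
        super().__init__()
        self.num_proposals = num_proposals
        self.radius = radius
        # Centralization head: shifts each representative point toward its object center.
        self.offset_head = nn.Linear(feat_dim, 3)
        # Two regression branches whose outputs are summed into the final box parameters.
        self.intra_head = nn.Linear(feat_dim, box_dim)   # aggregated intra-object feature
        self.inter_head = nn.Linear(feat_dim, box_dim)   # relation-based inter-object feature
        self.relation_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, xyz, feats):
        # xyz: (N, 3) representative points, feats: (N, C) backbone features.
        shifted = xyz + self.offset_head(feats)                       # centralization
        idx = torch.randperm(shifted.size(0))[: self.num_proposals]   # stand-in for proposal sampling
        prop_xyz = shifted[idx]
        # Intra-object pooling: max over points that fall near each proposal.
        dist = torch.cdist(prop_xyz, shifted)                         # (P, N)
        mask = (dist < self.radius).float().unsqueeze(-1)             # (P, N, 1)
        pooled = (feats.unsqueeze(0) * mask).max(dim=1).values        # (P, C)
        # Inter-object relation: softmax-weighted mix of the proposals' pooled features.
        attn = torch.softmax(self.relation_proj(pooled) @ pooled.t(), dim=-1)
        relation = attn @ pooled                                      # (P, C)
        boxes = self.intra_head(pooled) + self.inter_head(relation)   # additive prediction
        return prop_xyz, boxes


if __name__ == "__main__":
    net = GRNetSketch()
    centers, boxes = net(torch.rand(2048, 3), torch.rand(2048, 128))
    print(centers.shape, boxes.shape)  # torch.Size([64, 3]) torch.Size([64, 7])
```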
Detailed description

Author: Li, Ying [author]
Format: Electronic article
Language: English
Published: 2020 (transfer abstract)
Subjects: Deep learning; RGB-D; Point cloud; Geometric relation; Indoor mapping; 3D object detection
Extent: 11 pages
Parent work: Contained in: In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid - Skiadopoulos, V. ELSEVIER, 2013, official publication of the International Society for Photogrammetry and Remote Sensing (ISPRS), Amsterdam [u.a.]
Parent work: volume:165 ; year:2020 ; pages:43-53 ; extent:11
Links:
DOI / URN: 10.1016/j.isprsjprs.2020.05.008
Catalog ID: ELV050574523
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | ELV050574523 | ||
003 | DE-627 | ||
005 | 20230626030812.0 | ||
007 | cr uuu---uuuuu | ||
008 | 200625s2020 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.isprsjprs.2020.05.008 |2 doi | |
028 | 5 | 2 | |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001076.pica |
035 | |a (DE-627)ELV050574523 | ||
035 | |a (ELSEVIER)S0924-2716(20)30128-3 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
082 | 0 | 4 | |a 570 |q VZ |
082 | 0 | 4 | |a 610 |q VZ |
082 | 0 | 4 | |a 620 |q VZ |
084 | |a 52.57 |2 bkl | ||
084 | |a 53.36 |2 bkl | ||
100 | 1 | |a Li, Ying |e verfasserin |4 aut | |
245 | 1 | 0 | |a GRNet: Geometric relation network for 3D object detection from point clouds |
264 | 1 | |c 2020transfer abstract | |
300 | |a 11 | ||
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a nicht spezifiziert |b z |2 rdamedia | ||
338 | |a nicht spezifiziert |b zu |2 rdacarrier | ||
520 | |a Rapid detection of 3D objects in indoor environments is essential for indoor mapping and modeling, robotic perception and localization, and building reconstruction. 3D point clouds acquired by a low-cost RGB-D camera have become one of the most commonly used data sources for 3D indoor mapping. However, due to the sparse surfaces, empty object centers, and varying scales of point cloud objects, 3D bounding boxes are challenging to estimate and locate accurately. To address this, geometric shape, topological structure, and object relations are commonly employed to extract box reasoning information. In this paper, we describe the geometric feature among an object's points as an intra-object feature and the relation feature between different objects as an inter-object feature. Based on these two features, we propose an end-to-end network for 3D object detection from point clouds, termed the geometric relation network (GRNet). GRNet first extracts intra-object and inter-object features for each representative point using our proposed backbone network. Then, a centralization module with a scalable loss function is proposed to shift each representative object point toward its object center. Next, proposal points are sampled from these shifted points, followed by a proposal feature pooling operation. Finally, an object-relation learning module is applied to predict bounding box parameters. These parameters are the sum of the predictions from the relation-based inter-object feature and the aggregated intra-object feature. Our model achieves state-of-the-art 3D detection results with 59.1% mAP@0.25 and 39.1% mAP@0.5 on the ScanNetV2 dataset, and 58.4% mAP@0.25 and 34.9% mAP@0.5 on the SUN RGB-D dataset. | ||
650 | 7 | |a Deep learning |2 Elsevier | |
650 | 7 | |a RGB-D |2 Elsevier | |
650 | 7 | |a Point cloud |2 Elsevier | |
650 | 7 | |a Geometric relation |2 Elsevier | |
650 | 7 | |a Indoor mapping |2 Elsevier | |
650 | 7 | |a 3D object detection |2 Elsevier | |
700 | 1 | |a Ma, Lingfei |4 oth | |
700 | 1 | |a Tan, Weikai |4 oth | |
700 | 1 | |a Sun, Chen |4 oth | |
700 | 1 | |a Cao, Dongpu |4 oth | |
700 | 1 | |a Li, Jonathan |4 oth | |
773 | 0 | 8 | |i Enthalten in |n Elsevier |a Skiadopoulos, V. ELSEVIER |t In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid |d 2013 |d official publication of the International Society for Photogrammetry and Remote Sensing (ISPRS) |g Amsterdam [u.a.] |w (DE-627)ELV016966376 |
773 | 1 | 8 | |g volume:165 |g year:2020 |g pages:43-53 |g extent:11 |
856 | 4 | 0 | |u https://doi.org/10.1016/j.isprsjprs.2020.05.008 |3 Volltext |
912 | |a GBV_USEFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SYSFLAG_U | ||
912 | |a GBV_ILN_70 | ||
936 | b | k | |a 52.57 |j Energiespeicherung |q VZ |
936 | b | k | |a 53.36 |j Energiedirektumwandler |j elektrische Energiespeicher |q VZ |
951 | |a AR | ||
952 | |d 165 |j 2020 |h 43-53 |g 11 |
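Field 856 links the record to the full text via the DOI carried in field 024, and the 773 fields carry the container title and the volume/year/page citation. A minimal sketch of pulling those values out of a MARCXML export of this record with only the standard library; the file name "record.xml" is hypothetical.

```python
# Minimal sketch: extract title, DOI, and citation data from a MARCXML export of
# this record using only the standard library. "record.xml" is a hypothetical file
# holding a <collection><record>...</record></collection> serialization of the record above.
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}


def subfields(record, tag, code):
    """All values of subfield `code` in datafields with the given tag."""
    return [
        sf.text
        for df in record.findall(f"marc:datafield[@tag='{tag}']", NS)
        for sf in df.findall(f"marc:subfield[@code='{code}']", NS)
    ]


record = ET.parse("record.xml").getroot().find(".//marc:record", NS)

title = subfields(record, "245", "a")[0]                  # field 245 $a
doi = subfields(record, "024", "a")[0]                    # field 024 $a
container = subfields(record, "773", "t")[0]              # field 773 $t
# The second 773 field carries volume:165, year:2020, pages:43-53, extent:11 in $g.
parts = dict(g.split(":", 1) for g in subfields(record, "773", "g") if ":" in g)

print(title)
print("Contained in:", container)
print(f"vol. {parts['volume']} ({parts['year']}), pp. {parts['pages']}")
print(f"Full text: https://doi.org/{doi}")                # mirrors the 856 $u link
```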
author_variant |
y l yl |
matchkey_str |
liyingmalingfeitanweikaisunchencaodongpu:2020----:regoerceainewrfrdbeteeto |
hierarchy_sort_str |
2020transfer abstract |
bklnumber |
52.57 53.36 |
publishDate |
2020 |
language |
English |
source |
Enthalten in In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid Amsterdam [u.a.] volume:165 year:2020 pages:43-53 extent:11 |
sourceStr |
Enthalten in In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid Amsterdam [u.a.] volume:165 year:2020 pages:43-53 extent:11 |
format_phy_str_mv |
Article |
bklname |
Energiespeicherung Energiedirektumwandler elektrische Energiespeicher |
institution |
findex.gbv.de |
topic_facet |
Deep learning RGB-D Point cloud Geometric relation Indoor mapping 3D object detection |
dewey-raw |
570 |
isfreeaccess_bool |
false |
container_title |
In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid |
authorswithroles_txt_mv |
Li, Ying @@aut@@ Ma, Lingfei @@oth@@ Tan, Weikai @@oth@@ Sun, Chen @@oth@@ Cao, Dongpu @@oth@@ Li, Jonathan @@oth@@ |
publishDateDaySort_date |
2020-01-01T00:00:00Z |
hierarchy_top_id |
ELV016966376 |
dewey-sort |
3570 |
id |
ELV050574523 |
language_de |
englisch |
author |
Li, Ying |
spellingShingle |
Li, Ying ddc 570 ddc 610 ddc 620 bkl 52.57 bkl 53.36 Elsevier Deep learning Elsevier RGB-D Elsevier Point cloud Elsevier Geometric relation Elsevier Indoor mapping Elsevier 3D object detection GRNet: Geometric relation network for 3D object detection from point clouds |
authorStr |
Li, Ying |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV016966376 |
format |
electronic Article |
dewey-ones |
570 - Life sciences; biology 610 - Medicine & health 620 - Engineering & allied operations |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
570 VZ 610 VZ 620 VZ 52.57 bkl 53.36 bkl GRNet: Geometric relation network for 3D object detection from point clouds Deep learning Elsevier RGB-D Elsevier Point cloud Elsevier Geometric relation Elsevier Indoor mapping Elsevier 3D object detection Elsevier |
topic |
ddc 570 ddc 610 ddc 620 bkl 52.57 bkl 53.36 Elsevier Deep learning Elsevier RGB-D Elsevier Point cloud Elsevier Geometric relation Elsevier Indoor mapping Elsevier 3D object detection |
topic_unstemmed |
ddc 570 ddc 610 ddc 620 bkl 52.57 bkl 53.36 Elsevier Deep learning Elsevier RGB-D Elsevier Point cloud Elsevier Geometric relation Elsevier Indoor mapping Elsevier 3D object detection |
topic_browse |
ddc 570 ddc 610 ddc 620 bkl 52.57 bkl 53.36 Elsevier Deep learning Elsevier RGB-D Elsevier Point cloud Elsevier Geometric relation Elsevier Indoor mapping Elsevier 3D object detection |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
l m lm w t wt c s cs d c dc j l jl |
hierarchy_parent_title |
In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid |
hierarchy_parent_id |
ELV016966376 |
dewey-tens |
570 - Life sciences; biology 610 - Medicine & health 620 - Engineering |
hierarchy_top_title |
In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV016966376 |
title |
GRNet: Geometric relation network for 3D object detection from point clouds |
ctrlnum |
(DE-627)ELV050574523 (ELSEVIER)S0924-2716(20)30128-3 |
title_full |
GRNet: Geometric relation network for 3D object detection from point clouds |
author_sort |
Li, Ying |
journal |
In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid |
journalStr |
In Vitro and In Vivo UV Light Skin Protection by an Antioxidant Derivative of NSAID Tolfenamic Acid |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science 600 - Technology |
recordtype |
marc |
publishDateSort |
2020 |
contenttype_str_mv |
zzz |
container_start_page |
43 |
author_browse |
Li, Ying |
container_volume |
165 |
physical |
11 |
class |
570 VZ 610 VZ 620 VZ 52.57 bkl 53.36 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Li, Ying |
doi_str_mv |
10.1016/j.isprsjprs.2020.05.008 |
dewey-full |
570 610 620 |
title_sort |
grnet: geometric relation network for 3d object detection from point clouds |
title_auth |
GRNet: Geometric relation network for 3D object detection from point clouds |
abstract |
Rapid detection of 3D objects in indoor environments is essential for indoor mapping and modeling, robotic perception and localization, and building reconstruction. 3D point clouds acquired by a low-cost RGB-D camera have become one of the most commonly used data sources for 3D indoor mapping. However, due to the sparse surfaces, empty object centers, and varying scales of point cloud objects, 3D bounding boxes are challenging to estimate and locate accurately. To address this, geometric shape, topological structure, and object relations are commonly employed to extract box reasoning information. In this paper, we describe the geometric feature among an object's points as an intra-object feature and the relation feature between different objects as an inter-object feature. Based on these two features, we propose an end-to-end network for 3D object detection from point clouds, termed the geometric relation network (GRNet). GRNet first extracts intra-object and inter-object features for each representative point using our proposed backbone network. Then, a centralization module with a scalable loss function is proposed to shift each representative object point toward its object center. Next, proposal points are sampled from these shifted points, followed by a proposal feature pooling operation. Finally, an object-relation learning module is applied to predict bounding box parameters. These parameters are the sum of the predictions from the relation-based inter-object feature and the aggregated intra-object feature. Our model achieves state-of-the-art 3D detection results with 59.1% mAP@0.25 and 39.1% mAP@0.5 on the ScanNetV2 dataset, and 58.4% mAP@0.25 and 34.9% mAP@0.5 on the SUN RGB-D dataset.
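The reported mAP@0.25 and mAP@0.5 scores average per-class precision over detections whose 3D IoU with a ground-truth box exceeds 0.25 or 0.5, respectively. Below is a small sketch of the underlying match test for axis-aligned boxes; the box values are made up for illustration.

```python
# Sketch: 3D IoU between axis-aligned boxes and the match test behind mAP@0.25 / mAP@0.5.
# Boxes are (cx, cy, cz, dx, dy, dz); the sample values below are made up.

def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes."""
    inter = 1.0
    for i in range(3):
        lo = max(a[i] - a[i + 3] / 2, b[i] - b[i + 3] / 2)
        hi = min(a[i] + a[i + 3] / 2, b[i] + b[i + 3] / 2)
        if hi <= lo:          # no overlap along this axis
            return 0.0
        inter *= hi - lo
    vol_a = a[3] * a[4] * a[5]
    vol_b = b[3] * b[4] * b[5]
    return inter / (vol_a + vol_b - inter)


prediction   = (0.10, 0.05, 0.00, 1.0, 1.0, 0.9)   # hypothetical detected box
ground_truth = (0.00, 0.00, 0.00, 1.0, 1.0, 1.0)

iou = iou_3d(prediction, ground_truth)
print(f"IoU = {iou:.3f}")
print("true positive at mAP@0.25:", iou >= 0.25)
print("true positive at mAP@0.5: ", iou >= 0.5)
```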
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_70 |
title_short |
GRNet: Geometric relation network for 3D object detection from point clouds |
url |
https://doi.org/10.1016/j.isprsjprs.2020.05.008 |
remote_bool |
true |
author2 |
Ma, Lingfei Tan, Weikai Sun, Chen Cao, Dongpu Li, Jonathan |
author2Str |
Ma, Lingfei Tan, Weikai Sun, Chen Cao, Dongpu Li, Jonathan |
ppnlink |
ELV016966376 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth oth oth |
doi_str |
10.1016/j.isprsjprs.2020.05.008 |
up_date |
2024-07-06T17:54:38.447Z |
_version_ |
1803853211578138624 |
score |
7.4004793 |
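The key/value pairs above are Solr index fields for this record. A hedged sketch of how such a record could be retrieved through Solr's standard select handler and its parent record followed via ppnlink; the endpoint URL is hypothetical, while the field names (id, title, doi_str_mv, ppnlink) are taken from the listing above.

```python
# Sketch: look this record up, then follow ppnlink to its parent record, via Solr's
# standard select handler. The endpoint URL is hypothetical.
import requests

SOLR_SELECT = "https://example.org/solr/biblio/select"  # hypothetical endpoint


def solr_docs(query, fields):
    """Run a Solr query and return the matching documents."""
    resp = requests.get(
        SOLR_SELECT,
        params={"q": query, "fl": ",".join(fields), "wt": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["response"]["docs"]


article = solr_docs('id:"ELV050574523"', ["id", "title", "doi_str_mv", "ppnlink"])[0]
print(article["title"], article.get("doi_str_mv"))

parent_id = article["ppnlink"]
if isinstance(parent_id, list):          # multivalued Solr fields come back as lists
    parent_id = parent_id[0]
print(solr_docs(f'id:"{parent_id}"', ["id", "title"]))
```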