Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud
Detailed description

Author: Li, Jingsong [author]; Wang, Ying [author]; Zheng, LiHua [author]; Zhang, Man [author]; Wang, Minjuan [author]
Format: E-article
Language: English
Published: 2023
Subject headings: 3D Point cloud; Remote sensing; Deep RNN; Plant height; Regression network; Vegetation structural parameters
Parent work: Contained in: Expert systems with applications - Amsterdam [u.a.] : Elsevier Science, 1990, 229
Parent work: volume:229
DOI / URN: 10.1016/j.eswa.2023.120497
Catalog ID: ELV010361359
LEADER 01000caa a22002652 4500
001 ELV010361359
003 DE-627
005 20230618073033.0
007 cr uuu---uuuuu
008 230613s2023 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.eswa.2023.120497 |2 doi
035    |a (DE-627)ELV010361359
035    |a (ELSEVIER)S0957-4174(23)00999-5
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 04 |a 004 |q VZ
084    |a 54.72 |2 bkl
100 1  |a Li, Jingsong |e verfasserin |4 aut
245 10 |a Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud
264  1 |c 2023
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Today, the 3D point cloud is arguably the most direct and effective data form for studying plant morphological structure. However, automatic, high-throughput extraction of accurate individual plant height from 3D point clouds remains a challenging problem. Summarizing related research from recent years, the main limiting factors are: (1) many existing methods require spatial auxiliary information such as ground control points (GCP), digital terrain models (DTM), and digital surface models (DSM) to obtain accurate plant height; (2) 3D point cloud data from different environments usually demand specialized modeling and careful parameter fine-tuning; (3) point cloud processing often combines multiple programming languages and software packages, which complicates system integration. To address these challenges, we first propose a novel end-to-end deep Recurrent Neural Network (RNN) based regression framework called DRN, which consists of three parts: a point cloud feature extraction network, a deep RNN, and a regression network. The convolution-based point cloud feature extraction network filters out noise, outliers, and redundant information; the deep RNN, built from long short-term memory (LSTM) cells, learns relationships between feature points separated by a certain distance along the high-dimensional feature sequence; and the regression network maps the deep RNN output to a plant height value. Experimental results on the 3rd Greenhouse Growing Challenge dataset show that DRN can directly and effectively regress the height of a single plant, without manual operations or spatial auxiliary information, achieving an R2 of 0.948 and a relative root mean square error (RRMSE) of 10.06% across four lettuce varieties at different growth stages.
After studying how the weights of the x, y, and z coordinates of the input 3D point cloud influence the regression result, we then design a Dimension Attention (DA) module at the front end of the feature extraction network to learn characteristic coordinate weights for every input point cloud sample. The DRN network with a DA module, called D-DRN, achieves better results (R2 = 0.960; RRMSE = 8.680%) than DRN. Given that the end-to-end DRN and D-DRN networks are easy to integrate and reach considerable prediction accuracy on public datasets, we believe they complement existing methods for obtaining plant morphological structure phenotypes from point cloud data.
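The abstract reports performance as R2 and relative root mean square error (RRMSE). A minimal sketch of these metrics follows, assuming the common definition of RRMSE as the RMSE normalised by the mean ground-truth value, in percent (the paper's exact normalisation is not stated in this record):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rrmse(y_true, y_pred):
    """Relative RMSE in percent: RMSE divided by the mean true value."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / y_true.mean()

# Toy lettuce heights in cm (hypothetical values, not from the paper)
heights = np.array([10.0, 12.5, 15.0, 18.0])
preds = np.array([10.5, 12.0, 15.5, 17.5])
print(r2_score(heights, preds), rrmse(heights, preds))
```

With these definitions, a perfect predictor gives R2 = 1 and RRMSE = 0%, matching the direction of improvement the abstract reports for D-DRN over DRN.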
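The Dimension Attention idea described above (per-sample learned weights on the x, y, z input coordinates) can be illustrated with a small NumPy forward pass. The softmax weighting and the tiny scoring layer below are illustrative assumptions, not the paper's actual DA module:

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(v - v.max())
    return e / e.sum()

def dimension_attention(points, W, b):
    """Re-weight the x/y/z axes of one point-cloud sample.

    points : (N, 3) array of coordinates
    W, b   : parameters of a toy scoring layer that maps a per-axis
             summary statistic to 3 logits (hypothetical parametrisation)
    Returns the re-weighted cloud and the per-axis weights.
    """
    stats = np.abs(points).mean(axis=0)   # (3,) per-axis summary of the sample
    weights = softmax(W @ stats + b)      # (3,) positive, sums to 1
    return points * weights, weights      # broadcast weights over all N points

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))         # sparse toy point cloud
W = rng.normal(scale=0.1, size=(3, 3))
b = np.zeros(3)
weighted, w = dimension_attention(cloud, W, b)
print(weighted.shape, w)
```

Because the weights depend on the sample's own coordinate statistics, each input cloud receives its own axis weighting before feature extraction, which is the behaviour the abstract attributes to the DA module.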
650  4 |a 3D Point cloud
650  4 |a Remote sensing
650  4 |a Deep RNN
650  4 |a Plant height
650  4 |a Regression network
650  4 |a Vegetation structural parameters
700 1  |a Wang, Ying |e verfasserin |4 aut
700 1  |a Zheng, LiHua |e verfasserin |4 aut
700 1  |a Zhang, Man |e verfasserin |4 aut
700 1  |a Wang, Minjuan |e verfasserin |0 (orcid)0000-0002-7520-1726 |4 aut
773 08 |i Enthalten in |t Expert systems with applications |d Amsterdam [u.a.] : Elsevier Science, 1990 |g 229 |h Online-Ressource |w (DE-627)320577961 |w (DE-600)2017237-0 |w (DE-576)11481807X |7 nnns
773 18 |g volume:229
912    |a GBV_USEFLAG_U
912    |a SYSFLAG_U
912    |a GBV_ELV
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_32
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_90
912    |a GBV_ILN_95
912    |a GBV_ILN_100
912    |a GBV_ILN_101
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_150
912    |a GBV_ILN_151
912    |a GBV_ILN_187
912    |a GBV_ILN_213
912    |a GBV_ILN_224
912    |a GBV_ILN_230
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_702
912    |a GBV_ILN_2001
912    |a GBV_ILN_2003
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2007
912    |a GBV_ILN_2008
912    |a GBV_ILN_2009
912    |a GBV_ILN_2010
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2015
912    |a GBV_ILN_2020
912    |a GBV_ILN_2021
912    |a GBV_ILN_2025
912    |a GBV_ILN_2026
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2055
912    |a GBV_ILN_2056
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2088
912    |a GBV_ILN_2106
912    |a GBV_ILN_2110
912    |a GBV_ILN_2111
912    |a GBV_ILN_2112
912    |a GBV_ILN_2122
912    |a GBV_ILN_2129
912    |a GBV_ILN_2143
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2190
912    |a GBV_ILN_2232
912    |a GBV_ILN_2336
912    |a GBV_ILN_2470
912    |a GBV_ILN_2507
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4242
912    |a GBV_ILN_4249
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4338
912    |a GBV_ILN_4393
912    |a GBV_ILN_4700
936 bk |a 54.72 |j Künstliche Intelligenz |q VZ
951    |a AR
952    |d 229
code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Nowadays, the 3D point cloud is considered to be the most direct and effective data form for studying plant morphological structure. However, the automatic and high-throughput acquisition of accurate individual plant height traits from 3D point clouds remains an urgent and challenging problem. Summarizing the related research results of recent years, the factors limiting its application mainly come from the following aspects: (1) many existing methods require spatial auxiliary information such as ground control points (GCP), digital terrain models (DTM) and digital surface models (DSM) to obtain accurate plant height; (2) for 3D point cloud data from different environments, specialized modeling and careful parameter fine-tuning are usually required; (3) point cloud processing sometimes involves the combined use of multiple programming languages and software packages, which makes system integration difficult. Focusing on these challenges, we first propose a novel end-to-end deep Recurrent Neural Network (RNN) based regression framework called DRN, which consists of three parts: a point cloud feature extraction network, a deep RNN and a regression network. The convolution operation-based point cloud feature extraction network functions to filter noise, outliers and redundant information; the deep RNN, with its long short-term memory (LSTM) ability, learns the relationships between feature points separated by a certain distance on the high-dimensional feature sequence; and the regression network maps the output of the deep RNN to a plant height value. 
Experimental results on the 3rd Greenhouse Growing Challenge datasets show that DRN can directly and effectively regress the height of a single plant, without manual operations or spatial auxiliary information, with an R2 of 0.948 and a relative root mean square error (RRMSE) of 10.06% across four varieties of lettuce at different growth periods. After studying the influence of the weights of the x, y and z coordinates of the input 3D point cloud on the regression result, we then designed a Dimension Attention (DA) module at the front end of the feature extraction network to learn a characteristic coordinate weight for every input point cloud sample. The DRN network with a DA module is called D-DRN; experimental results indicate that D-DRN achieves better results (R2 = 0.960; RRMSE = 8.680%) than DRN. Considering that the end-to-end DRN and D-DRN networks are easy to integrate and achieve considerable prediction accuracy on public datasets, we believe they have a complementary effect on existing methods of obtaining plant morphological structure phenotypes from point cloud data.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">3D Point cloud</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Remote sensing</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Deep RNN</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Plant height</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Regression network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Vegetation structural parameters</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Ying</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zheng, 
LiHua</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Man</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Minjuan</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-7520-1726</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Expert systems with applications</subfield><subfield code="d">Amsterdam [u.a.] : Elsevier Science, 1990</subfield><subfield code="g">229</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)320577961</subfield><subfield code="w">(DE-600)2017237-0</subfield><subfield code="w">(DE-576)11481807X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:229</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" 
" ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield 
tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">229</subfield></datafield></record></collection>
|
author |
Li, Jingsong |
spellingShingle |
Li, Jingsong ddc 004 bkl 54.72 misc 3D Point cloud misc Remote sensing misc Deep RNN misc Plant height misc Regression network misc Vegetation structural parameters Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud |
authorStr |
Li, Jingsong |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)320577961 |
format |
electronic Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
004 VZ 54.72 bkl Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud 3D Point cloud Remote sensing Deep RNN Plant height Regression network Vegetation structural parameters |
topic |
ddc 004 bkl 54.72 misc 3D Point cloud misc Remote sensing misc Deep RNN misc Plant height misc Regression network misc Vegetation structural parameters |
topic_unstemmed |
ddc 004 bkl 54.72 misc 3D Point cloud misc Remote sensing misc Deep RNN misc Plant height misc Regression network misc Vegetation structural parameters |
topic_browse |
ddc 004 bkl 54.72 misc 3D Point cloud misc Remote sensing misc Deep RNN misc Plant height misc Regression network misc Vegetation structural parameters |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Expert systems with applications |
hierarchy_parent_id |
320577961 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Expert systems with applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)320577961 (DE-600)2017237-0 (DE-576)11481807X |
title |
Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud |
ctrlnum |
(DE-627)ELV010361359 (ELSEVIER)S0957-4174(23)00999-5 |
title_full |
Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud |
author_sort |
Li, Jingsong |
journal |
Expert systems with applications |
journalStr |
Expert systems with applications |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
author_browse |
Li, Jingsong Wang, Ying Zheng, LiHua Zhang, Man Wang, Minjuan |
container_volume |
229 |
class |
004 VZ 54.72 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Li, Jingsong |
doi_str_mv |
10.1016/j.eswa.2023.120497 |
normlink |
(ORCID)0000-0002-7520-1726 |
normlink_prefix_str_mv |
(orcid)0000-0002-7520-1726 |
dewey-full |
004 |
author2-role |
verfasserin |
title_sort |
towards end-to-end deep rnn based networks to precisely regress of the lettuce plant height by single perspective sparse 3d point cloud |
title_auth |
Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud |
abstract |
Nowadays, the 3D point cloud is considered to be the most direct and effective data form for studying plant morphological structure. However, the automatic and high-throughput acquisition of accurate individual plant height traits from 3D point clouds remains an urgent and challenging problem. Summarizing the related research results of recent years, the factors limiting its application mainly come from the following aspects: (1) many existing methods require spatial auxiliary information such as ground control points (GCP), digital terrain models (DTM) and digital surface models (DSM) to obtain accurate plant height; (2) for 3D point cloud data from different environments, specialized modeling and careful parameter fine-tuning are usually required; (3) point cloud processing sometimes involves the combined use of multiple programming languages and software packages, which makes system integration difficult. Focusing on these challenges, we first propose a novel end-to-end deep Recurrent Neural Network (RNN) based regression framework called DRN, which consists of three parts: a point cloud feature extraction network, a deep RNN and a regression network. The convolution operation-based point cloud feature extraction network functions to filter noise, outliers and redundant information; the deep RNN, with its long short-term memory (LSTM) ability, learns the relationships between feature points separated by a certain distance on the high-dimensional feature sequence; and the regression network maps the output of the deep RNN to a plant height value. Experimental results on the 3rd Greenhouse Growing Challenge datasets show that DRN can directly and effectively regress the height of a single plant, without manual operations or spatial auxiliary information, with an R2 of 0.948 and a relative root mean square error (RRMSE) of 10.06% across four varieties of lettuce at different growth periods. 
After studying the influence of the weights of the x, y and z coordinates of the input 3D point cloud on the regression result, we then designed a Dimension Attention (DA) module at the front end of the feature extraction network to learn a characteristic coordinate weight for every input point cloud sample. The DRN network with a DA module is called D-DRN; experimental results indicate that D-DRN achieves better results (R2 = 0.960; RRMSE = 8.680%) than DRN. Considering that the end-to-end DRN and D-DRN networks are easy to integrate and achieve considerable prediction accuracy on public datasets, we believe they have a complementary effect on existing methods of obtaining plant morphological structure phenotypes from point cloud data. |
abstractGer |
Nowadays, the 3D point cloud is considered to be the most direct and effective data form for studying plant morphological structure. However, the automatic and high-throughput acquisition of accurate individual plant height traits from 3D point clouds remains an urgent and challenging problem. Summarizing the related research results of recent years, the factors limiting its application mainly come from the following aspects: (1) many existing methods require spatial auxiliary information such as ground control points (GCP), digital terrain models (DTM) and digital surface models (DSM) to obtain accurate plant height; (2) for 3D point cloud data from different environments, specialized modeling and careful parameter fine-tuning are usually required; (3) point cloud processing sometimes involves the combined use of multiple programming languages and software packages, which makes system integration difficult. Focusing on these challenges, we first propose a novel end-to-end deep Recurrent Neural Network (RNN) based regression framework called DRN, which consists of three parts: a point cloud feature extraction network, a deep RNN and a regression network. The convolution operation-based point cloud feature extraction network functions to filter noise, outliers and redundant information; the deep RNN, with its long short-term memory (LSTM) ability, learns the relationships between feature points separated by a certain distance on the high-dimensional feature sequence; and the regression network maps the output of the deep RNN to a plant height value. Experimental results on the 3rd Greenhouse Growing Challenge datasets show that DRN can directly and effectively regress the height of a single plant, without manual operations or spatial auxiliary information, with an R2 of 0.948 and a relative root mean square error (RRMSE) of 10.06% across four varieties of lettuce at different growth periods. 
After studying the influence of the weights of the x, y and z coordinates of the input 3D point cloud on the regression result, we then designed a Dimension Attention (DA) module at the front end of the feature extraction network to learn a characteristic coordinate weight for every input point cloud sample. The DRN network with a DA module is called D-DRN; experimental results indicate that D-DRN achieves better results (R2 = 0.960; RRMSE = 8.680%) than DRN. Considering that the end-to-end DRN and D-DRN networks are easy to integrate and achieve considerable prediction accuracy on public datasets, we believe they have a complementary effect on existing methods of obtaining plant morphological structure phenotypes from point cloud data. |
abstract_unstemmed |
Nowadays, the 3D point cloud is considered to be the most direct and effective data form for studying plant morphological structure. However, the automatic and high-throughput acquisition of accurate individual plant height traits from 3D point clouds remains an urgent and challenging problem. Summarizing the related research results of recent years, the factors limiting its application mainly come from the following aspects: (1) many existing methods require spatial auxiliary information such as ground control points (GCP), digital terrain models (DTM) and digital surface models (DSM) to obtain accurate plant height; (2) for 3D point cloud data from different environments, specialized modeling and careful parameter fine-tuning are usually required; (3) point cloud processing sometimes involves the combined use of multiple programming languages and software packages, which makes system integration difficult. Focusing on these challenges, we first propose a novel end-to-end deep Recurrent Neural Network (RNN) based regression framework called DRN, which consists of three parts: a point cloud feature extraction network, a deep RNN and a regression network. The convolution operation-based point cloud feature extraction network functions to filter noise, outliers and redundant information; the deep RNN, with its long short-term memory (LSTM) ability, learns the relationships between feature points separated by a certain distance on the high-dimensional feature sequence; and the regression network maps the output of the deep RNN to a plant height value. Experimental results on the 3rd Greenhouse Growing Challenge datasets show that DRN can directly and effectively regress the height of a single plant, without manual operations or spatial auxiliary information, with an R2 of 0.948 and a relative root mean square error (RRMSE) of 10.06% across four varieties of lettuce at different growth periods. 
After studying the influence of the weights of the x, y and z coordinates of the input 3D point cloud on the regression result, we then designed a Dimension Attention (DA) module at the front end of the feature extraction network to learn a characteristic coordinate weight for every input point cloud sample. The DRN network with a DA module is called D-DRN; experimental results indicate that D-DRN achieves better results (R2 = 0.960; RRMSE = 8.680%) than DRN. Considering that the end-to-end DRN and D-DRN networks are easy to integrate and achieve considerable prediction accuracy on public datasets, we believe they have a complementary effect on existing methods of obtaining plant morphological structure phenotypes from point cloud data. |
collection_details |
GBV_USEFLAG_U SYSFLAG_U GBV_ELV GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
title_short |
Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud |
remote_bool |
true |
author2 |
Wang, Ying Zheng, LiHua Zhang, Man Wang, Minjuan |
author2Str |
Wang, Ying Zheng, LiHua Zhang, Man Wang, Minjuan |
ppnlink |
320577961 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.eswa.2023.120497 |
up_date |
2024-07-06T17:43:09.307Z |
_version_ |
1803852488962473984 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV010361359</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230618073033.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230613s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.eswa.2023.120497</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV010361359</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0957-4174(23)00999-5</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.72</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Li, Jingsong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Towards end-to-end deep RNN based networks to precisely regress of the lettuce plant height by single perspective sparse 3D point cloud</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield 
code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Nowadays, 3D point cloud is supposed to be the most direct and effective data form for studying plant morphology structure. However, automatic and high-throughput acquisition of accurate individual plant height traits from 3D point cloud remains an urgent challenging problem. Summarizing the related research results in recent years, the factors limiting its application mainly come from these aspects: (1) Many existing methods require spatial auxiliary information such as ground control points (GCP), digital terrain models (DTM) and digital surface models (DSM) to obtain accurate plant height; (2) For 3D point cloud data in different environments, specialized modeling and careful parameter fine-tuning are usually required; (3) Sometimes, the point cloud processing involves the combined utilization of multiple programming languages and software, which is difficult for system integration. Focusing on these challenges, firstly, we proposed a novel end-to-end deep Recurrent Neural Network (RNN) based regression network framework called DRN, which consists of three parts: point cloud feature extraction network, deep RNN and regression network. The convolution operations-based point cloud feature extraction network is function as filtering noise, outliers and redundant information; The deep RNN network with long and short-term memory (LSTM) ability is used to learn the relationships between the feature points on the high-dimensional feature sequence separated by a certain distance; regression network is used to regress the output from deep RNN to plant height value. 
Experimental results on the 3rd Greenhouse Growing Challenge datasets show that DRN can directly and effectively regress the plant height of a single plant, without manual operations or spatial auxiliary information, achieving an R2 of 0.948 and a relative root mean square error (RRMSE) of 10.06% across four different varieties of lettuce at different growth periods. After studying the influence of the weights of the x, y and z coordinates of the input 3D point cloud on the regression result, we then design a Dimension Attention (DA) module at the front end of the feature extraction network to learn the characteristic coordinate weights for every input point cloud sample. The DRN network with a DA module is called D-DRN; experimental results indicate that D-DRN tends to achieve better results (R2 = 0.960; RRMSE = 8.680%) than DRN. Considering that the end-to-end DRN and D-DRN networks are easy to integrate and achieve considerable prediction accuracy on public datasets, we believe they have a certain complementary effect on existing methods of obtaining plant morphological structure phenotypes from point cloud data.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">3D Point cloud</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Remote sensing</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Deep RNN</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Plant height</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Regression network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Vegetation structural parameters</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Ying</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zheng, 
LiHua</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Man</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Minjuan</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-7520-1726</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Expert systems with applications</subfield><subfield code="d">Amsterdam [u.a.] : Elsevier Science, 1990</subfield><subfield code="g">229</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)320577961</subfield><subfield code="w">(DE-600)2017237-0</subfield><subfield code="w">(DE-576)11481807X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:229</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" 
" ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield 
tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">229</subfield></datafield></record></collection>