Unsupervised domain adaptation using transformers for sugarcane rows and gaps detection
Deep learning has brought an impressive advance to the field of machine learning and continually breaks records in dozens of areas of artificial intelligence, such as image recognition. Nevertheless, the success of these architectures depends on large amounts of labeled data, and the annotation...
Detailed description

Author(s): dos Santos Ferreira, Alessandro [author]; Junior, José Marcato [author]; Pistori, Hemerson [author]; Melgani, Farid [author]; Gonçalves, Wesley Nunes [author]
Format: E-article
Language: English
Published: 2022
Subjects: Unsupervised domain adaptation; Vision transformer; Crop rows detection
Parent work: Contained in: Computers and electronics in agriculture - Amsterdam [u.a.]: Elsevier Science, 1985, 203
Parent work: volume:203
DOI / URN: 10.1016/j.compag.2022.107480
Catalog ID: ELV008836574
LEADER 01000caa a22002652 4500
001 ELV008836574
003 DE-627
005 20230524123721.0
007 cr uuu---uuuuu
008 230509s2022 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.compag.2022.107480 |2 doi
035    |a (DE-627)ELV008836574
035    |a (ELSEVIER)S0168-1699(22)00788-8
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 04 |a 620 |a 630 |a 640 |a 004 |q DE-600
084    |a 48.03 |2 bkl
100 1  |a dos Santos Ferreira, Alessandro |e verfasserin |0 (orcid)0000-0002-9588-5889 |4 aut
245 10 |a Unsupervised domain adaptation using transformers for sugarcane rows and gaps detection
264  1 |c 2022
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Deep learning has brought an impressive advance to the field of machine learning and continually breaks records in dozens of areas of artificial intelligence, such as image recognition. Nevertheless, the success of these architectures depends on large amounts of labeled data, and annotating training data is a costly process that is often performed manually. The cost of labeling and the difficulty of generalizing model knowledge to unseen data pose an obstacle to the use of these techniques in real-world agricultural challenges. In this work, we propose an approach to deal with this problem when detecting crop rows and gaps, and our findings can be extended to other related problems with few modifications. Our approach generates approximate segmentation maps from annotated one-pixel-wide lines using dilation. This method speeds up the pixel labeling process and reduces the line detection problem to semantic segmentation. We considered the transformer-based method SegFormer and compared it with the ConvNet segmentation models PSPNet and DeepLabV3+ on datasets containing aerial images of four different sugarcane farms. To evaluate the ability to transfer knowledge learned from source datasets to target datasets, we used a recent, state-of-the-art unsupervised domain adaptation (UDA) model, DAFormer, which has achieved great results in adapting knowledge from synthetic data to real data. In this work, we were able to evaluate its performance using only real-world images from different but related domains. Even without domain adaptation, the transformer-based model SegFormer performed significantly better than the ConvNets on unseen data, and applying UDA with DAFormer improved the results further, reaching from 71.1% to 94.5% relative performance with respect to the average F1-score achieved with supervised training on labeled data. (Illustrative sketches of the dilation step and the relative F1 metric follow this record.)
650  4 |a Unsupervised domain adaptation
650  4 |a Vision transformer
650  4 |a Crop rows detection
700 1  |a Junior, José Marcato |e verfasserin |4 aut
700 1  |a Pistori, Hemerson |e verfasserin |4 aut
700 1  |a Melgani, Farid |e verfasserin |4 aut
700 1  |a Gonçalves, Wesley Nunes |e verfasserin |4 aut
773 08 |i Enthalten in |t Computers and electronics in agriculture |d Amsterdam [u.a.] : Elsevier Science, 1985 |g 203 |h Online-Ressource |w (DE-627)320567826 |w (DE-600)2016151-7 |w (DE-576)090955684 |x 1872-7107 |7 nnns
773 18 |g volume:203
912    |a GBV_USEFLAG_U
912    |a SYSFLAG_U
912    |a GBV_ELV
912    |a SSG-OPC-FOR
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_32
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_63
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_90
912    |a GBV_ILN_95
912    |a GBV_ILN_100
912    |a GBV_ILN_101
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_150
912    |a GBV_ILN_151
912    |a GBV_ILN_224
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_702
912    |a GBV_ILN_2003
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2015
912    |a GBV_ILN_2020
912    |a GBV_ILN_2021
912    |a GBV_ILN_2025
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2038
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2056
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2065
912    |a GBV_ILN_2068
912    |a GBV_ILN_2111
912    |a GBV_ILN_2112
912    |a GBV_ILN_2113
912    |a GBV_ILN_2118
912    |a GBV_ILN_2122
912    |a GBV_ILN_2129
912    |a GBV_ILN_2143
912    |a GBV_ILN_2147
912    |a GBV_ILN_2148
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2190
912    |a GBV_ILN_2336
912    |a GBV_ILN_2507
912    |a GBV_ILN_2522
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4126
912    |a GBV_ILN_4242
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4313
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4335
912    |a GBV_ILN_4338
912    |a GBV_ILN_4393
936 bk |a 48.03 |j Methoden und Techniken der Land- und Forstwirtschaft
951    |a AR
952    |d 203
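The abstract above (field 520) reduces row and gap detection to semantic segmentation by dilating the one-pixel-wide line annotations into approximate segmentation maps. A minimal sketch of that labeling step, assuming OpenCV and a toy binary line mask; the 15x15 kernel width is an illustrative assumption, not a value taken from the paper:

```python
import cv2
import numpy as np

# Toy binary mask with a one-pixel-wide annotated crop row
# (1 = row pixel, 0 = background).
line_mask = np.zeros((256, 256), dtype=np.uint8)
line_mask[128, 20:236] = 1  # hypothetical horizontal row annotation

# Dilation thickens the thin line into a band-shaped region, turning
# the sparse line annotation into an approximate segmentation map
# that can serve as a training label. The kernel size is assumed.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
approx_label = cv2.dilate(line_mask, kernel, iterations=1)
```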
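The reported 71.1% to 94.5% figures are relative performance: the average F1-score of the UDA-trained model expressed as a percentage of the average F1-score obtained with fully supervised training on labeled target data. A small sketch of that computation; all numbers are illustrative, not results from the paper:

```python
# F1 = 2 * precision * recall / (precision + recall). Relative
# performance expresses the UDA F1 as a percentage of the F1
# achieved by supervised training on the same target domain.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Illustrative precision/recall values only.
f1_uda = f1_score(precision=0.80, recall=0.76)
f1_supervised = f1_score(precision=0.90, recall=0.88)

relative_performance = 100 * f1_uda / f1_supervised
print(f"{relative_performance:.1f}% of the supervised F1-score")
```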
ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">48.03</subfield><subfield code="j">Methoden und Techniken der Land- und Forstwirtschaft</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">203</subfield></datafield></record></collection>