Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear
Multi-instance learning, a commonly used technique in artificial intelligence for analyzing slides, can be applied to diagnose thyroid cancer based on cytological smears. Since smears lack the multidimensional histological features of histopathology, mining potential contextual information and feature diversity is crucial for better classification performance. In this paper, we propose a pyramid multi-loss vision transformer model called PyMLViT, a novel algorithm with two core modules to address these issues. Specifically, we design a pyramid token extraction module to acquire potential contextual information from smears. The pyramid token structure extracts multi-scale local features, and the vision transformer structure further obtains global information through the self-attention mechanism. Furthermore, we construct a multi-loss fusion module based on the conventional multi-instance learning framework. With carefully designed bag and patch weight allocation strategies, we incorporate slide-level annotations as pseudo-labels for patches during training, thus enhancing the diversity of supervised information. Extensive experiments on a real-world dataset show that PyMLViT achieves high performance with a competitive number of parameters compared to popular methods for diagnosing thyroid cancer in cytological smears.
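The abstract describes two mechanisms: multi-scale ("pyramid") token extraction feeding a vision transformer, and a multi-loss fusion in which slide-level labels are propagated to patches as pseudo-labels and combined with the bag-level loss. The record contains no source code, so the following PyTorch sketch only illustrates that second idea under assumed names and shapes (TinyPatchEncoder, the attention pooling, and the 0.5 patch-loss weight are all hypothetical); it is not the authors' PyMLViT implementation.

```python
# Hedged sketch, NOT the published PyMLViT code: every patch in a slide (bag)
# inherits the slide-level label as a pseudo-label, and an attention-weighted
# patch loss is fused with the bag-level loss. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyPatchEncoder(nn.Module):
    """Stand-in for the pyramid-token vision-transformer backbone (assumed)."""
    def __init__(self, in_dim=768, hidden=256, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.patch_head = nn.Linear(hidden, num_classes)   # patch-level (pseudo-label) head
        self.attn = nn.Linear(hidden, 1)                   # attention weights for bag pooling
        self.bag_head = nn.Linear(hidden, num_classes)     # bag-level (slide) head

    def forward(self, patches):                 # patches: (n_patches, in_dim)
        h = self.backbone(patches)              # (n, hidden)
        patch_logits = self.patch_head(h)       # (n, num_classes)
        w = torch.softmax(self.attn(h), dim=0)  # (n, 1) attention over patches
        bag_feat = (w * h).sum(dim=0)           # weighted bag representation
        bag_logits = self.bag_head(bag_feat)    # (num_classes,)
        return bag_logits, patch_logits, w.squeeze(-1)


def multi_loss(bag_logits, patch_logits, patch_w, slide_label, patch_loss_weight=0.5):
    """Fuse the bag loss with a pseudo-labelled, attention-weighted patch loss (assumed form)."""
    bag_loss = F.cross_entropy(bag_logits.unsqueeze(0), slide_label.view(1))
    pseudo = slide_label.expand(patch_logits.size(0))           # slide label as patch pseudo-label
    per_patch = F.cross_entropy(patch_logits, pseudo, reduction="none")
    patch_loss = (patch_w.detach() * per_patch).sum()           # weight patches by attention
    return bag_loss + patch_loss_weight * patch_loss


# Toy usage: one bag of 32 pre-extracted patch features with a slide-level label.
model = TinyPatchEncoder()
patches = torch.randn(32, 768)
label = torch.tensor(1)
loss = multi_loss(*model(patches), label)
loss.backward()
```

In the paper the backbone is a pyramid-token vision transformer and the bag/patch weight allocation strategies are more elaborate; the sketch shows only how a bag-level and a pseudo-labelled patch-level loss could be fused into one training objective.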
Detailed description
Author(s): Yu, Bo [author]; Yin, Peng [author]; Chen, Hechang [author]; Wang, Yifei [author]; Zhao, Yu [author]; Cong, Xianling [author]; Dijkstra, Jouke [author]; Cong, Lele [author]
Format: E-article
Language: English
Published: 2023
Subject headings: Pyramid feature; Cytological smear; Vision transformer; Multiple instance learning; Thyroid cancer
Parent work: Contained in: Knowledge-based systems - Amsterdam [u.a.] : Elsevier Science, 1987, 275
Parent work: volume:275
DOI / URN: 10.1016/j.knosys.2023.110721
Catalogue ID: ELV060272767
LEADER  01000caa a22002652 4500
001     ELV060272767
003     DE-627
005     20231005073315.0
007     cr uuu---uuuuu
008     230709s2023 xx |||||o 00| ||eng c
024 7_  |a 10.1016/j.knosys.2023.110721 |2 doi
035 __  |a (DE-627)ELV060272767
035 __  |a (ELSEVIER)S0950-7051(23)00471-9
040 __  |a DE-627 |b ger |c DE-627 |e rda
041 __  |a eng
082 04  |a 004 |q VZ
084 __  |a 54.72 |2 bkl
100 1_  |a Yu, Bo |e verfasserin |4 aut
245 10  |a Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear
264 _1  |c 2023
336 __  |a nicht spezifiziert |b zzz |2 rdacontent
337 __  |a Computermedien |b c |2 rdamedia
338 __  |a Online-Ressource |b cr |2 rdacarrier
520 __  |a Multi-instance learning, a commonly used technique in artificial intelligence for analyzing slides, can be applied to diagnose thyroid cancer based on cytological smears. Since smears do not have multidimensional histological features similar to histopathology, mining potential contextual information and diversity of features is crucial for better classification performance. In this paper, we propose a pyramid multi-loss vision transformer model called PyMLViT, a novel algorithm with two core modules to address these issues. Specifically, we design a pyramid token extraction module to acquire potential contextual information on smears. The pyramid token structure extracts multi-scale local features, and the vision transformer structure further obtains global information through the self-attention mechanism. Furthermore, we construct multi-loss fusion module based on the conventional multi-instance learning framework. With carefully designed bag and patch weight allocation strategies, we incorporate slide-level annotations as pseudo-labels for patches to participate in training, thus enhancing the diversity of supervised information. Extensive experimental results on the real-world dataset show that PyMLViT has a high performance and a competitive number of parameters compared to popular methods for diagnosing thyroid cancer in cytological smears.
650 _4  |a Pyramid feature
650 _4  |a Cytological smear
650 _4  |a Vision transformer
650 _4  |a Multiple instance learning
650 _4  |a Thyroid cancer
700 1_  |a Yin, Peng |e verfasserin |4 aut
700 1_  |a Chen, Hechang |e verfasserin |0 (orcid)0000-0001-7835-9556 |4 aut
700 1_  |a Wang, Yifei |e verfasserin |4 aut
700 1_  |a Zhao, Yu |e verfasserin |4 aut
700 1_  |a Cong, Xianling |e verfasserin |4 aut
700 1_  |a Dijkstra, Jouke |e verfasserin |0 (orcid)0000-0002-8666-3731 |4 aut
700 1_  |a Cong, Lele |e verfasserin |4 aut
773 08  |i Enthalten in |t Knowledge-based systems |d Amsterdam [u.a.] : Elsevier Science, 1987 |g 275 |h Online-Ressource |w (DE-627)320580024 |w (DE-600)2017495-0 |w (DE-576)253018722 |x 0950-7051 |7 nnns
773 18  |g volume:275
912 __  |a GBV_USEFLAG_U
912 __  |a GBV_ELV
912 __  |a SYSFLAG_U
912 __  |a GBV_ILN_20
912 __  |a GBV_ILN_22
912 __  |a GBV_ILN_23
912 __  |a GBV_ILN_24
912 __  |a GBV_ILN_31
912 __  |a GBV_ILN_32
912 __  |a GBV_ILN_40
912 __  |a GBV_ILN_60
912 __  |a GBV_ILN_62
912 __  |a GBV_ILN_65
912 __  |a GBV_ILN_69
912 __  |a GBV_ILN_70
912 __  |a GBV_ILN_73
912 __  |a GBV_ILN_74
912 __  |a GBV_ILN_90
912 __  |a GBV_ILN_95
912 __  |a GBV_ILN_100
912 __  |a GBV_ILN_101
912 __  |a GBV_ILN_105
912 __  |a GBV_ILN_110
912 __  |a GBV_ILN_150
912 __  |a GBV_ILN_151
912 __  |a GBV_ILN_187
912 __  |a GBV_ILN_213
912 __  |a GBV_ILN_224
912 __  |a GBV_ILN_230
912 __  |a GBV_ILN_370
912 __  |a GBV_ILN_602
912 __  |a GBV_ILN_702
912 __  |a GBV_ILN_2001
912 __  |a GBV_ILN_2003
912 __  |a GBV_ILN_2004
912 __  |a GBV_ILN_2005
912 __  |a GBV_ILN_2007
912 __  |a GBV_ILN_2008
912 __  |a GBV_ILN_2009
912 __  |a GBV_ILN_2010
912 __  |a GBV_ILN_2011
912 __  |a GBV_ILN_2014
912 __  |a GBV_ILN_2015
912 __  |a GBV_ILN_2020
912 __  |a GBV_ILN_2021
912 __  |a GBV_ILN_2025
912 __  |a GBV_ILN_2026
912 __  |a GBV_ILN_2027
912 __  |a GBV_ILN_2034
912 __  |a GBV_ILN_2044
912 __  |a GBV_ILN_2048
912 __  |a GBV_ILN_2049
912 __  |a GBV_ILN_2050
912 __  |a GBV_ILN_2055
912 __  |a GBV_ILN_2056
912 __  |a GBV_ILN_2059
912 __  |a GBV_ILN_2061
912 __  |a GBV_ILN_2064
912 __  |a GBV_ILN_2088
912 __  |a GBV_ILN_2106
912 __  |a GBV_ILN_2110
912 __  |a GBV_ILN_2111
912 __  |a GBV_ILN_2112
912 __  |a GBV_ILN_2122
912 __  |a GBV_ILN_2129
912 __  |a GBV_ILN_2143
912 __  |a GBV_ILN_2152
912 __  |a GBV_ILN_2153
912 __  |a GBV_ILN_2190
912 __  |a GBV_ILN_2232
912 __  |a GBV_ILN_2336
912 __  |a GBV_ILN_2470
912 __  |a GBV_ILN_2507
912 __  |a GBV_ILN_4035
912 __  |a GBV_ILN_4037
912 __  |a GBV_ILN_4112
912 __  |a GBV_ILN_4125
912 __  |a GBV_ILN_4242
912 __  |a GBV_ILN_4249
912 __  |a GBV_ILN_4251
912 __  |a GBV_ILN_4305
912 __  |a GBV_ILN_4306
912 __  |a GBV_ILN_4307
912 __  |a GBV_ILN_4313
912 __  |a GBV_ILN_4322
912 __  |a GBV_ILN_4323
912 __  |a GBV_ILN_4324
912 __  |a GBV_ILN_4325
912 __  |a GBV_ILN_4326
912 __  |a GBV_ILN_4333
912 __  |a GBV_ILN_4334
912 __  |a GBV_ILN_4338
912 __  |a GBV_ILN_4393
912 __  |a GBV_ILN_4700
936 bk  |a 54.72 |j Künstliche Intelligenz |q VZ
951 __  |a AR
952 __  |d 275
author_variant |
b y by p y py h c hc y w yw y z yz x c xc j d jd l c lc |
matchkey_str |
article:09507051:2023----::yaimliosiinrnfrefrhricnecasfcto |
hierarchy_sort_str |
2023 |
bklnumber |
54.72 |
publishDate |
2023 |
allfields |
10.1016/j.knosys.2023.110721 doi (DE-627)ELV060272767 (ELSEVIER)S0950-7051(23)00471-9 DE-627 ger DE-627 rda eng 004 VZ 54.72 bkl Yu, Bo verfasserin aut Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear 2023 nicht spezifiziert zzz rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier Multi-instance learning, a commonly used technique in artificial intelligence for analyzing slides, can be applied to diagnose thyroid cancer based on cytological smears. Since smears do not have multidimensional histological features similar to histopathology, mining potential contextual information and diversity of features is crucial for better classification performance. In this paper, we propose a pyramid multi-loss vision transformer model called PyMLViT, a novel algorithm with two core modules to address these issues. Specifically, we design a pyramid token extraction module to acquire potential contextual information on smears. The pyramid token structure extracts multi-scale local features, and the vision transformer structure further obtains global information through the self-attention mechanism. Furthermore, we construct multi-loss fusion module based on the conventional multi-instance learning framework. With carefully designed bag and patch weight allocation strategies, we incorporate slide-level annotations as pseudo-labels for patches to participate in training, thus enhancing the diversity of supervised information. Extensive experimental results on the real-world dataset show that PyMLViT has a high performance and a competitive number of parameters compared to popular methods for diagnosing thyroid cancer in cytological smears. Pyramid feature Cytological smear Vision transformer Multiple instance learning Thyroid cancer Yin, Peng verfasserin aut Chen, Hechang verfasserin (orcid)0000-0001-7835-9556 aut Wang, Yifei verfasserin aut Zhao, Yu verfasserin aut Cong, Xianling verfasserin aut Dijkstra, Jouke verfasserin (orcid)0000-0002-8666-3731 aut Cong, Lele verfasserin aut Enthalten in Knowledge-based systems Amsterdam [u.a.] : Elsevier Science, 1987 275 Online-Ressource (DE-627)320580024 (DE-600)2017495-0 (DE-576)253018722 0950-7051 nnns volume:275 GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 54.72 Künstliche Intelligenz VZ AR 275 |
language |
English |
source |
Enthalten in Knowledge-based systems 275 volume:275 |
sourceStr |
Enthalten in Knowledge-based systems 275 volume:275 |
format_phy_str_mv |
Article |
bklname |
Künstliche Intelligenz |
institution |
findex.gbv.de |
topic_facet |
Pyramid feature Cytological smear Vision transformer Multiple instance learning Thyroid cancer |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Knowledge-based systems |
authorswithroles_txt_mv |
Yu, Bo @@aut@@ Yin, Peng @@aut@@ Chen, Hechang @@aut@@ Wang, Yifei @@aut@@ Zhao, Yu @@aut@@ Cong, Xianling @@aut@@ Dijkstra, Jouke @@aut@@ Cong, Lele @@aut@@ |
publishDateDaySort_date |
2023-01-01T00:00:00Z |
hierarchy_top_id |
320580024 |
dewey-sort |
14 |
id |
ELV060272767 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV060272767</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20231005073315.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230709s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.knosys.2023.110721</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV060272767</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0950-7051(23)00471-9</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.72</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Yu, Bo</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Multi-instance learning, a commonly used technique in artificial intelligence for analyzing slides, can be applied to diagnose thyroid cancer based on cytological smears. Since smears do not have multidimensional histological features similar to histopathology, mining potential contextual information and diversity of features is crucial for better classification performance. In this paper, we propose a pyramid multi-loss vision transformer model called PyMLViT, a novel algorithm with two core modules to address these issues. Specifically, we design a pyramid token extraction module to acquire potential contextual information on smears. The pyramid token structure extracts multi-scale local features, and the vision transformer structure further obtains global information through the self-attention mechanism. Furthermore, we construct multi-loss fusion module based on the conventional multi-instance learning framework. With carefully designed bag and patch weight allocation strategies, we incorporate slide-level annotations as pseudo-labels for patches to participate in training, thus enhancing the diversity of supervised information. 
Extensive experimental results on the real-world dataset show that PyMLViT has a high performance and a competitive number of parameters compared to popular methods for diagnosing thyroid cancer in cytological smears.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Pyramid feature</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Cytological smear</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Vision transformer</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multiple instance learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Thyroid cancer</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yin, Peng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chen, Hechang</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0001-7835-9556</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Yifei</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhao, Yu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Cong, Xianling</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Dijkstra, Jouke</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-8666-3731</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Cong, Lele</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Knowledge-based systems</subfield><subfield code="d">Amsterdam [u.a.] 
: Elsevier Science, 1987</subfield><subfield code="g">275</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)320580024</subfield><subfield code="w">(DE-600)2017495-0</subfield><subfield code="w">(DE-576)253018722</subfield><subfield code="x">0950-7051</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:275</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">275</subfield></datafield></record></collection>
author |
Yu, Bo |
spellingShingle |
Yu, Bo ddc 004 bkl 54.72 misc Pyramid feature misc Cytological smear misc Vision transformer misc Multiple instance learning misc Thyroid cancer Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear |
authorStr |
Yu, Bo |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)320580024 |
format |
electronic Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
0950-7051 |
topic_title |
004 VZ 54.72 bkl Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear Pyramid feature Cytological smear Vision transformer Multiple instance learning Thyroid cancer |
topic |
ddc 004 bkl 54.72 misc Pyramid feature misc Cytological smear misc Vision transformer misc Multiple instance learning misc Thyroid cancer |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Knowledge-based systems |
hierarchy_parent_id |
320580024 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Knowledge-based systems |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)320580024 (DE-600)2017495-0 (DE-576)253018722 |
title |
Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear |
ctrlnum |
(DE-627)ELV060272767 (ELSEVIER)S0950-7051(23)00471-9 |
title_full |
Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear |
author_sort |
Yu, Bo |
journal |
Knowledge-based systems |
journalStr |
Knowledge-based systems |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
author_browse |
Yu, Bo Yin, Peng Chen, Hechang Wang, Yifei Zhao, Yu Cong, Xianling Dijkstra, Jouke Cong, Lele |
container_volume |
275 |
class |
004 VZ 54.72 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Yu, Bo |
doi_str_mv |
10.1016/j.knosys.2023.110721 |
normlink |
(ORCID)0000-0001-7835-9556 (ORCID)0000-0002-8666-3731 |
normlink_prefix_str_mv |
(orcid)0000-0001-7835-9556 (orcid)0000-0002-8666-3731 |
dewey-full |
004 |
author2-role |
verfasserin |
title_sort |
pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear |
title_auth |
Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear |
abstract |
Multi-instance learning, a commonly used technique in artificial intelligence for analyzing slides, can be applied to diagnose thyroid cancer based on cytological smears. Since smears do not have multidimensional histological features similar to histopathology, mining potential contextual information and diversity of features is crucial for better classification performance. In this paper, we propose a pyramid multi-loss vision transformer model called PyMLViT, a novel algorithm with two core modules to address these issues. Specifically, we design a pyramid token extraction module to acquire potential contextual information on smears. The pyramid token structure extracts multi-scale local features, and the vision transformer structure further obtains global information through the self-attention mechanism. Furthermore, we construct multi-loss fusion module based on the conventional multi-instance learning framework. With carefully designed bag and patch weight allocation strategies, we incorporate slide-level annotations as pseudo-labels for patches to participate in training, thus enhancing the diversity of supervised information. Extensive experimental results on the real-world dataset show that PyMLViT has a high performance and a competitive number of parameters compared to popular methods for diagnosing thyroid cancer in cytological smears. |
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
title_short |
Pyramid multi-loss vision transformer for thyroid cancer classification using cytological smear |
remote_bool |
true |
author2 |
Yin, Peng Chen, Hechang Wang, Yifei Zhao, Yu Cong, Xianling Dijkstra, Jouke Cong, Lele |
author2Str |
Yin, Peng Chen, Hechang Wang, Yifei Zhao, Yu Cong, Xianling Dijkstra, Jouke Cong, Lele |
ppnlink |
320580024 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.knosys.2023.110721 |
up_date |
2024-07-06T23:42:40.410Z |
_version_ |
1803875107903373312 |