Collaborative learning mutual network for domain adaptation in person re-identification
Abstract: In this paper, we propose a new Collaborative Learning Mutual Network (CLM-Net) for domain adaptation in person re-identification (re-id). Current state-of-the-art re-id models achieve good performance when trained on published datasets, but they perform poorly on newly collected target-domain datasets. Our proposed CLM-Net aims to overcome this limitation by using body-part and saliency-map learning to improve discriminative representations in a deep ensemble framework. Specifically, we integrate body-part feature learning tasks and a global saliency task into the baseline model so that information that helps identify the pedestrian can be extracted. As a result, the trained representations produce better pseudo labels during the clustering process and provide a smooth transition from the source to the target domain. We also propose to leverage unlabeled data with contrastive learning to encode stronger representations. CLM-Net outperforms most of the current state of the art, with 80.9%, 69.7%, 29.0% and 26.6% mAP on Duke-to-Market, Market-to-Duke, Duke-to-MSMT and Market-to-MSMT domain adaptation, respectively, using a ResNet-50 backbone.
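The abstract reports retrieval quality as mAP (mean average precision). As a brief illustration only — this is a generic sketch of the metric, not code from the paper — mAP over a set of queries can be computed from each query's ranked list of relevant/irrelevant flags:

```python
def average_precision(ranked_relevance):
    """AP for one query.

    ranked_relevance: sequence of 1/0 flags for the gallery items,
    ordered by descending similarity to the query (1 = correct identity).
    """
    hits = 0
    precision_sum = 0.0
    for rank, is_relevant in enumerate(ranked_relevance, start=1):
        if is_relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this hit's rank
    return precision_sum / hits if hits else 0.0


def mean_average_precision(per_query_relevance):
    """mAP: mean of the per-query average precisions."""
    queries = list(per_query_relevance)
    return sum(average_precision(q) for q in queries) / len(queries)
```

For example, a query whose two true matches rank 1st and 3rd scores AP = (1/1 + 2/3) / 2 ≈ 0.83; mAP averages this over all queries, which is how figures such as 80.9% on Duke-to-Market are reported.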
Detailed description
Author: Tay, Chiat-Pin [author]
Format: Article
Language: English
Published: 2022
Subjects: Collaborative and mutual learning; Domain adaptation; Person re-identification; Contrastive learning
Note: © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022
Parent work: Contained in: Neural computing & applications - Springer London, 1993, 34(2022), 14, 16 March, pages 12211-12222
Parent work: volume:34 ; year:2022 ; number:14 ; day:16 ; month:03 ; pages:12211-12222
Links:
DOI / URN: 10.1007/s00521-022-07108-5
Catalog ID: OLC2079157698
LEADER 01000caa a22002652 4500
001 OLC2079157698
003 DE-627
005 20230506035948.0
007 tu
008 221220s2022 xx ||||| 00| ||eng c
024 7_ |a 10.1007/s00521-022-07108-5 |2 doi
035 __ |a (DE-627)OLC2079157698
035 __ |a (DE-He213)s00521-022-07108-5-p
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
082 04 |a 004 |q VZ
100 1_ |a Tay, Chiat-Pin |e verfasserin |0 (orcid)0000-0002-4984-9780 |4 aut
245 10 |a Collaborative learning mutual network for domain adaptation in person re-identification
264 _1 |c 2022
336 __ |a Text |b txt |2 rdacontent
337 __ |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __ |a Band |b nc |2 rdacarrier
500 __ |a © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022
520 __ |a Abstract In this paper, we propose a new Collaborative Learning Mutual Network (CLM-Net) for domain adaptation in person re-identification (re-id). Current state-of-the-art re-id models achieved good performances when trained on published datasets. However, these trained models work poorly on newly collected dataset for target domain. Our proposed CLM-Net aims to overcome this limitation by using body part and saliency map learning to improve the discriminative representations in a deep ensemble framework. Specifically, we integrate body part features learning tasks and a global saliency task to the baseline model so that information that helps to identify the pedestrian can be extracted. As a result, the trained representations produced better pseudo labels during the clustering process and provide smooth transition from the source to the target domains. We also propose to leverage the unlabeled data by using contrastive learning to further encode strong representations. Our proposed CLM-Net outperforms most of the current state-of-the-art, with 80.9%, 69.7%, 29.0% and 26.6% mAP accuracy on Duke-to-Market, Market-to-Duke, Duke-to-MSMT and Market-to-MSMT domain adaptation, respectively, using ResNet-50 backbone.
650 _4 |a Collaborative and mutual learning
650 _4 |a Domain adaptation
650 _4 |a Person re-identification
650 _4 |a Contrastive learning
700 1_ |a Yap, Kim-Hui |4 aut
773 08 |i Enthalten in |t Neural computing & applications |d Springer London, 1993 |g 34(2022), 14 vom: 16. März, Seite 12211-12222 |w (DE-627)165669608 |w (DE-600)1136944-9 |w (DE-576)032873050 |x 0941-0643 |7 nnns
773 18 |g volume:34 |g year:2022 |g number:14 |g day:16 |g month:03 |g pages:12211-12222
856 41 |u https://doi.org/10.1007/s00521-022-07108-5 |z lizenzpflichtig |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_OLC
912 __ |a SSG-OLC-MAT
912 __ |a GBV_ILN_2018
912 __ |a GBV_ILN_4277
951 __ |a AR
952 __ |d 34 |j 2022 |e 14 |b 16 |c 03 |h 12211-12222
author_variant |
c p t cpt k h y khy |
matchkey_str |
article:09410643:2022----::olbrtvlannmtantokodmiaattoip |
hierarchy_sort_str |
2022 |
publishDate |
2022 |
allfields |
10.1007/s00521-022-07108-5 doi (DE-627)OLC2079157698 (DE-He213)s00521-022-07108-5-p DE-627 ger DE-627 rakwb eng 004 VZ Tay, Chiat-Pin verfasserin (orcid)0000-0002-4984-9780 aut Collaborative learning mutual network for domain adaptation in person re-identification 2022 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022 Abstract In this paper, we propose a new Collaborative Learning Mutual Network (CLM-Net) for domain adaptation in person re-identification (re-id). Current state-of-the-art re-id models achieved good performances when trained on published datasets. However, these trained models work poorly on newly collected dataset for target domain. Our proposed CLM-Net aims to overcome this limitation by using body part and saliency map learning to improve the discriminative representations in a deep ensemble framework. Specifically, we integrate body part features learning tasks and a global saliency task to the baseline model so that information that helps to identify the pedestrian can be extracted. As a result, the trained representations produced better pseudo labels during the clustering process and provide smooth transition from the source to the target domains. We also propose to leverage the unlabeled data by using contrastive learning to further encode strong representations. Our proposed CLM-Net outperforms most of the current state-of-the-art, with 80.9%, 69.7%, 29.0% and 26.6% mAP accuracy on Duke-to-Market, Market-to-Duke, Duke-to-MSMT and Market-to-MSMT domain adaptation, respectively, using ResNet-50 backbone. Collaborative and mutual learning Domain adaptation Person re-identification Contrastive learning Yap, Kim-Hui aut Enthalten in Neural computing & applications Springer London, 1993 34(2022), 14 vom: 16. 
März, Seite 12211-12222 (DE-627)165669608 (DE-600)1136944-9 (DE-576)032873050 0941-0643 nnns volume:34 year:2022 number:14 day:16 month:03 pages:12211-12222 https://doi.org/10.1007/s00521-022-07108-5 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_2018 GBV_ILN_4277 AR 34 2022 14 16 03 12211-12222 |
language |
English |
source |
Enthalten in Neural computing & applications 34(2022), 14 vom: 16. März, Seite 12211-12222 volume:34 year:2022 number:14 day:16 month:03 pages:12211-12222 |
sourceStr |
Enthalten in Neural computing & applications 34(2022), 14 vom: 16. März, Seite 12211-12222 volume:34 year:2022 number:14 day:16 month:03 pages:12211-12222 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Collaborative and mutual learning Domain adaptation Person re-identification Contrastive learning |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Neural computing & applications |
authorswithroles_txt_mv |
Tay, Chiat-Pin @@aut@@ Yap, Kim-Hui @@aut@@ |
publishDateDaySort_date |
2022-03-16T00:00:00Z |
hierarchy_top_id |
165669608 |
dewey-sort |
14 |
id |
OLC2079157698 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2079157698</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230506035948.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">221220s2022 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s00521-022-07108-5</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2079157698</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s00521-022-07108-5-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Tay, Chiat-Pin</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-4984-9780</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Collaborative learning mutual network for domain adaptation in person re-identification</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" 
"><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract In this paper, we propose a new Collaborative Learning Mutual Network (CLM-Net) for domain adaptation in person re-identification (re-id). Current state-of-the-art re-id models achieved good performances when trained on published datasets. However, these trained models work poorly on newly collected dataset for target domain. Our proposed CLM-Net aims to overcome this limitation by using body part and saliency map learning to improve the discriminative representations in a deep ensemble framework. Specifically, we integrate body part features learning tasks and a global saliency task to the baseline model so that information that helps to identify the pedestrian can be extracted. As a result, the trained representations produced better pseudo labels during the clustering process and provide smooth transition from the source to the target domains. We also propose to leverage the unlabeled data by using contrastive learning to further encode strong representations. 
Our proposed CLM-Net outperforms most of the current state-of-the-art, with 80.9%, 69.7%, 29.0% and 26.6% mAP accuracy on Duke-to-Market, Market-to-Duke, Duke-to-MSMT and Market-to-MSMT domain adaptation, respectively, using ResNet-50 backbone.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Collaborative and mutual learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Domain adaptation</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Person re-identification</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Contrastive learning</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yap, Kim-Hui</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Neural computing & applications</subfield><subfield code="d">Springer London, 1993</subfield><subfield code="g">34(2022), 14 vom: 16. 
März, Seite 12211-12222</subfield><subfield code="w">(DE-627)165669608</subfield><subfield code="w">(DE-600)1136944-9</subfield><subfield code="w">(DE-576)032873050</subfield><subfield code="x">0941-0643</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:34</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:14</subfield><subfield code="g">day:16</subfield><subfield code="g">month:03</subfield><subfield code="g">pages:12211-12222</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s00521-022-07108-5</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2018</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4277</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">34</subfield><subfield code="j">2022</subfield><subfield code="e">14</subfield><subfield code="b">16</subfield><subfield code="c">03</subfield><subfield code="h">12211-12222</subfield></datafield></record></collection>
|
author |
Tay, Chiat-Pin |
spellingShingle |
Tay, Chiat-Pin ddc 004 misc Collaborative and mutual learning misc Domain adaptation misc Person re-identification misc Contrastive learning Collaborative learning mutual network for domain adaptation in person re-identification |
authorStr |
Tay, Chiat-Pin |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)165669608 |
format |
Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
0941-0643 |
topic_title |
004 VZ Collaborative learning mutual network for domain adaptation in person re-identification Collaborative and mutual learning Domain adaptation Person re-identification Contrastive learning |
topic |
ddc 004 misc Collaborative and mutual learning misc Domain adaptation misc Person re-identification misc Contrastive learning |
topic_unstemmed |
ddc 004 misc Collaborative and mutual learning misc Domain adaptation misc Person re-identification misc Contrastive learning |
topic_browse |
ddc 004 misc Collaborative and mutual learning misc Domain adaptation misc Person re-identification misc Contrastive learning |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Neural computing & applications |
hierarchy_parent_id |
165669608 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Neural computing & applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)165669608 (DE-600)1136944-9 (DE-576)032873050 |
title |
Collaborative learning mutual network for domain adaptation in person re-identification |
ctrlnum |
(DE-627)OLC2079157698 (DE-He213)s00521-022-07108-5-p |
title_full |
Collaborative learning mutual network for domain adaptation in person re-identification |
author_sort |
Tay, Chiat-Pin |
journal |
Neural computing & applications |
journalStr |
Neural computing & applications |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
12211 |
author_browse |
Tay, Chiat-Pin Yap, Kim-Hui |
container_volume |
34 |
class |
004 VZ |
format_se |
Aufsätze |
author-letter |
Tay, Chiat-Pin |
doi_str_mv |
10.1007/s00521-022-07108-5 |
normlink |
(ORCID)0000-0002-4984-9780 |
normlink_prefix_str_mv |
(orcid)0000-0002-4984-9780 |
dewey-full |
004 |
title_sort |
collaborative learning mutual network for domain adaptation in person re-identification |
title_auth |
Collaborative learning mutual network for domain adaptation in person re-identification |
abstract |
Abstract In this paper, we propose a new Collaborative Learning Mutual Network (CLM-Net) for domain adaptation in person re-identification (re-id). Current state-of-the-art re-id models achieved good performances when trained on published datasets. However, these trained models work poorly on newly collected dataset for target domain. Our proposed CLM-Net aims to overcome this limitation by using body part and saliency map learning to improve the discriminative representations in a deep ensemble framework. Specifically, we integrate body part features learning tasks and a global saliency task to the baseline model so that information that helps to identify the pedestrian can be extracted. As a result, the trained representations produced better pseudo labels during the clustering process and provide smooth transition from the source to the target domains. We also propose to leverage the unlabeled data by using contrastive learning to further encode strong representations. Our proposed CLM-Net outperforms most of the current state-of-the-art, with 80.9%, 69.7%, 29.0% and 26.6% mAP accuracy on Duke-to-Market, Market-to-Duke, Duke-to-MSMT and Market-to-MSMT domain adaptation, respectively, using ResNet-50 backbone. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_2018 GBV_ILN_4277 |
container_issue |
14 |
title_short |
Collaborative learning mutual network for domain adaptation in person re-identification |
url |
https://doi.org/10.1007/s00521-022-07108-5 |
remote_bool |
false |
author2 |
Yap, Kim-Hui |
author2Str |
Yap, Kim-Hui |
ppnlink |
165669608 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s00521-022-07108-5 |
up_date |
2024-07-03T23:45:29.066Z |
_version_ |
1803603493852807168 |