Revisiting data augmentation for subspace clustering
Subspace clustering is the classical problem of clustering a collection of data samples that approximately lie around several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model, which represents each sample as a linear combination of other samples. However, these approaches require sufficiently well-spread samples for an accurate representation, which may not be available in many applications. In this paper, we shed light on this commonly neglected issue and argue that the data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks, for the unsupervised and the semi-supervised settings, that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. For the semi-supervised problem, we present an automatic augmentation strategy that uses a few labeled samples and relies on the fact that the data samples lie in the union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of the general self-expressive model.
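The self-expressive model mentioned in the abstract can be illustrated with a minimal sparse-subspace-clustering sketch: each sample is written as a sparse linear combination of the other samples, and the coefficient magnitudes define an affinity graph for spectral clustering. This is a generic illustration, not the paper's augmented-dictionary method; the regularization weight `lam` and the toy two-line dataset are assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def self_expressive_coefficients(X, lam=0.01):
    """Represent each sample as a sparse combination of the others.

    X: (d, n) data matrix, one sample per column.
    Returns an (n, n) coefficient matrix C with zero diagonal.
    """
    d, n = X.shape
    C = np.zeros((n, n))
    for j in range(n):
        others = np.delete(np.arange(n), j)  # exclude the trivial self-representation
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, others], X[:, j])
        C[others, j] = lasso.coef_
    return C

def ssc(X, n_clusters, lam=0.01):
    C = self_expressive_coefficients(X, lam)
    A = np.abs(C) + np.abs(C).T  # symmetrize coefficients into an affinity matrix
    sc = SpectralClustering(n_clusters=n_clusters,
                            affinity="precomputed", random_state=0)
    return sc.fit_predict(A)

# Toy data: two 1-dimensional subspaces (lines through the origin) in R^3.
rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
X = np.column_stack([u * t for t in rng.uniform(1, 2, 20)] +
                    [v * t for t in rng.uniform(1, 2, 20)])
labels = ssc(X, n_clusters=2)
```

When samples within a subspace are well spread, each sample finds same-subspace neighbors to reconstruct itself from, so the affinity matrix is close to block diagonal; the paper's point is that this breaks down for skewed within-subspace distributions, which augmented samples can repair.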
Detailed description
Author: Abdolali, Maryam [author]
Format: electronic article
Language: English
Published: 2022 (transfer abstract)
Keywords: Auto-augmentation; Subspace clustering; Data augmentation; Sparse representation
Parent work: Contained in: Subsurface fluid flow at an active cold seep area in the Qiongdongnan Basin, northern South China Sea - Wang, Jiliang ELSEVIER, 2018, Amsterdam [u.a.]
Parent work: volume:258 ; year:2022 ; day:22 ; month:12 ; pages:0
Links:
DOI / URN: 10.1016/j.knosys.2022.109974
Catalog ID: ELV059493313
LEADER 01000caa a22002652 4500
001 ELV059493313
003 DE-627
005 20230626053013.0
007 cr uuu---uuuuu
008 221219s2022 xx |||||o 00| ||eng c
024 7 |a 10.1016/j.knosys.2022.109974 |2 doi
028 5 2 |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001972.pica
035 |a (DE-627)ELV059493313
035 |a (ELSEVIER)S0950-7051(22)01067-X
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
082 0 4 |a 550 |q VZ
084 |a 38.00 |2 bkl
100 1 |a Abdolali, Maryam |e verfasserin |4 aut
245 1 0 |a Revisiting data augmentation for subspace clustering
264 1 |c 2022transfer abstract
336 |a nicht spezifiziert |b zzz |2 rdacontent
337 |a nicht spezifiziert |b z |2 rdamedia
338 |a nicht spezifiziert |b zu |2 rdacarrier
520 |a Subspace clustering is the classical problem of clustering a collection of data samples that approximately lie around several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model which represents the samples as linear combination of other samples. However, these approaches require sufficiently well-spread samples for accurate representation which might not be necessarily accessible in many applications. In this paper, we shed light on this commonly neglected issue and argue that data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution to tackle this issue is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks for both unsupervised and semi-supervised settings that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. We present an automatic augmentation strategy using a few labeled samples for the semi-supervised problem relying on the fact that the data samples lie in the union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of general self-expressive model.
650 7 |a Auto-augmentation |2 Elsevier
650 7 |a Subspace clustering |2 Elsevier
650 7 |a Data augmentation |2 Elsevier
650 7 |a Sparse representation |2 Elsevier
700 1 |a Gillis, Nicolas |4 oth
773 0 8 |i Enthalten in |n Elsevier Science |a Wang, Jiliang ELSEVIER |t Subsurface fluid flow at an active cold seep area in the Qiongdongnan Basin, northern South China Sea |d 2018 |g Amsterdam [u.a.] |w (DE-627)ELV001104926
773 1 8 |g volume:258 |g year:2022 |g day:22 |g month:12 |g pages:0
856 4 0 |u https://doi.org/10.1016/j.knosys.2022.109974 |3 Volltext
912 |a GBV_USEFLAG_U
912 |a GBV_ELV
912 |a SYSFLAG_U
912 |a SSG-OPC-GGO
936 b k |a 38.00 |j Geowissenschaften: Allgemeines |q VZ
951 |a AR
952 |d 258 |j 2022 |b 22 |c 1222 |h 0
hierarchy_parent_id |
ELV001104926 |
dewey-tens |
550 - Earth sciences & geology |
hierarchy_top_title |
Subsurface fluid flow at an active cold seep area in the Qiongdongnan Basin, northern South China Sea |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV001104926 |
title |
Revisiting data augmentation for subspace clustering |
ctrlnum |
(DE-627)ELV059493313 (ELSEVIER)S0950-7051(22)01067-X |
title_full |
Revisiting data augmentation for subspace clustering |
author_sort |
Abdolali, Maryam |
journal |
Subsurface fluid flow at an active cold seep area in the Qiongdongnan Basin, northern South China Sea |
journalStr |
Subsurface fluid flow at an active cold seep area in the Qiongdongnan Basin, northern South China Sea |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
0 |
author_browse |
Abdolali, Maryam |
container_volume |
258 |
class |
550 VZ 38.00 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Abdolali, Maryam |
doi_str_mv |
10.1016/j.knosys.2022.109974 |
dewey-full |
550 |
title_sort |
revisiting data augmentation for subspace clustering |
title_auth |
Revisiting data augmentation for subspace clustering |
abstract |
Subspace clustering is the classical problem of clustering a collection of data samples that lie approximately on several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model, which represents the samples as linear combinations of other samples. However, these approaches require sufficiently well-spread samples for accurate representation, which might not necessarily be available in many applications. In this paper, we shed light on this commonly neglected issue and argue that the data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution to this issue is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks, for the unsupervised and semi-supervised settings, that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. For the semi-supervised problem, we present an automatic augmentation strategy that uses a few labeled samples, relying on the fact that the data samples lie in the union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of the general self-expressive model.
abstractGer |
Subspace clustering is the classical problem of clustering a collection of data samples that lie approximately on several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model, which represents the samples as linear combinations of other samples. However, these approaches require sufficiently well-spread samples for accurate representation, which might not necessarily be available in many applications. In this paper, we shed light on this commonly neglected issue and argue that the data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution to this issue is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks, for the unsupervised and semi-supervised settings, that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. For the semi-supervised problem, we present an automatic augmentation strategy that uses a few labeled samples, relying on the fact that the data samples lie in the union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of the general self-expressive model.
abstract_unstemmed |
Subspace clustering is the classical problem of clustering a collection of data samples that lie approximately on several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model, which represents the samples as linear combinations of other samples. However, these approaches require sufficiently well-spread samples for accurate representation, which might not necessarily be available in many applications. In this paper, we shed light on this commonly neglected issue and argue that the data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution to this issue is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks, for the unsupervised and semi-supervised settings, that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. For the semi-supervised problem, we present an automatic augmentation strategy that uses a few labeled samples, relying on the fact that the data samples lie in the union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of the general self-expressive model.
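The abstract's central object is the self-expressive model: each sample is written as a linear combination of the other samples, and the resulting coefficient matrix induces an affinity used for clustering. The following is a minimal sketch of that idea only, not the authors' augmentation method; the function name, the ridge-regularized least-squares solver, and the toy data are illustrative assumptions.

```python
import numpy as np

def self_expressive_coefficients(X, reg=1e-2):
    """Express each column of X as a linear combination of the OTHER
    columns (ridge-regularized least squares), the core of the
    self-expressive model. Returns C with zero diagonal so that
    X @ C approximately reconstructs X."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        A = X[:, idx]  # dictionary: all samples except x_i
        # ridge solution: (A^T A + reg*I)^{-1} A^T x_i
        c = np.linalg.solve(A.T @ A + reg * np.eye(n - 1), A.T @ X[:, i])
        C[idx, i] = c
    return C

# Toy data: two 1-D subspaces (lines) in R^3; samples on each line
# are reconstructed mainly from samples on the same line.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
X = np.column_stack([u * 1.0, u * 2.0, u * -1.5,
                     v * 1.0, v * 0.5, v * 3.0])
C = self_expressive_coefficients(X)
# Symmetrized affinity, typically fed to spectral clustering.
affinity = np.abs(C) + np.abs(C).T
```

The paper's point of departure is that this representation degrades when samples within a subspace are poorly spread; the proposed remedy appends augmented samples as extra dictionary columns (an enlarged `A` above) before solving for the coefficients.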
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OPC-GGO |
title_short |
Revisiting data augmentation for subspace clustering |
url |
https://doi.org/10.1016/j.knosys.2022.109974 |
remote_bool |
true |
author2 |
Gillis, Nicolas |
author2Str |
Gillis, Nicolas |
ppnlink |
ELV001104926 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth |
doi_str |
10.1016/j.knosys.2022.109974 |
up_date |
2024-07-06T22:10:04.043Z |
_version_ |
1803869281630289920 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV059493313</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626053013.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">221219s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.knosys.2022.109974</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001972.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV059493313</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0950-7051(22)01067-X</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">550</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">38.00</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Abdolali, Maryam</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Revisiting data augmentation for subspace clustering</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield 
code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Subspace clustering is the classical problem of clustering a collection of data samples that approximately lie around several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model which represents the samples as linear combination of other samples. However, these approaches require sufficiently well-spread samples for accurate representation which might not be necessarily accessible in many applications. In this paper, we shed light on this commonly neglected issue and argue that data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution to tackle this issue is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks for both unsupervised and semi-supervised settings that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. We present an automatic augmentation strategy using a few labeled samples for the semi-supervised problem relying on the fact that the data samples lie in the union of multiple linear subspaces. 
Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of general self-expressive model.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Subspace clustering is the classical problem of clustering a collection of data samples that approximately lie around several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model which represents the samples as linear combination of other samples. However, these approaches require sufficiently well-spread samples for accurate representation which might not be necessarily accessible in many applications. In this paper, we shed light on this commonly neglected issue and argue that data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution to tackle this issue is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks for both unsupervised and semi-supervised settings that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. We present an automatic augmentation strategy using a few labeled samples for the semi-supervised problem relying on the fact that the data samples lie in the union of multiple linear subspaces. 
Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of general self-expressive model.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Auto-augmentation</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Subspace clustering</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Data augmentation</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Sparse representation</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Gillis, Nicolas</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier Science</subfield><subfield code="a">Wang, Jiliang ELSEVIER</subfield><subfield code="t">Subsurface fluid flow at an active cold seep area in the Qiongdongnan Basin, northern South China Sea</subfield><subfield code="d">2018</subfield><subfield code="g">Amsterdam [u.a.]</subfield><subfield code="w">(DE-627)ELV001104926</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:258</subfield><subfield code="g">year:2022</subfield><subfield code="g">day:22</subfield><subfield code="g">month:12</subfield><subfield code="g">pages:0</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.knosys.2022.109974</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">SSG-OPC-GGO</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">38.00</subfield><subfield code="j">Geowissenschaften: Allgemeines</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">258</subfield><subfield code="j">2022</subfield><subfield code="b">22</subfield><subfield code="c">1222</subfield><subfield code="h">0</subfield></datafield></record></collection>
|
score |
7.3989973 |