An ensemble-based method for linear feature extraction for two-class problems
Abstract In this paper we propose three variants of a linear feature extraction technique based on Adaboost for two-class classification problems. Unlike other feature extraction techniques, we do not make any assumptions about the distribution of the data. At each boosting step we select from a pool of linear projections the one that minimizes the weighted error. We propose three different variants of the feature extraction algorithm, depending on the way the pool of individual projections is constructed. Using nine real and two artificial data sets of different original dimensionality and sample size we compare the performance of the three proposed techniques with three classical techniques for linear feature extraction: Fisher linear discriminant analysis (FLD), Nonparametric discriminant analysis (NDA) and a recently proposed feature extraction method for heteroscedastic data based on the Chernoff criterion. Our results show that for data sets of relatively low original dimensionality FLD appears to be both the most accurate and the most economical feature extraction method (giving just one dimension in the case of two classes). The techniques based on Adaboost fare better than the classical techniques for data sets of large original dimensionality.
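The selection loop the abstract describes — at each boosting step, keep the linear projection from a candidate pool whose threshold classifier has the lowest weighted error — can be sketched as follows. This is a minimal, generic illustration in plain Python, not the authors' algorithm: the pool of random Gaussian directions, the decision-stump weak learner, and all function names are assumptions made for the demo.

```python
import math
import random

def project(x, w):
    """Dot product of sample x with projection direction w."""
    return sum(xi * wi for xi, wi in zip(x, w))

def best_stump(X, y, weights, pool):
    """Pick the (projection, threshold, polarity) stump from the pool
    that minimizes the weighted classification error."""
    best = None
    for w in pool:
        scores = [project(x, w) for x in X]
        for thr in scores:
            for pol in (1, -1):
                preds = [pol if s >= thr else -pol for s in scores]
                err = sum(wt for wt, p, t in zip(weights, preds, y) if p != t)
                if best is None or err < best[0]:
                    best = (err, w, thr, pol)
    return best

def adaboost_extract(X, y, n_steps=3, pool_size=20, seed=0):
    """Standard AdaBoost loop; each step selects one projection (the
    extracted linear feature) plus its stump and vote weight alpha."""
    rng = random.Random(seed)
    d, m = len(X[0]), len(X)
    weights = [1.0 / m] * m
    model = []
    for _ in range(n_steps):
        # Candidate pool: random Gaussian directions (illustrative choice).
        pool = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(pool_size)]
        err, w, thr, pol = best_stump(X, y, weights, pool)
        err = min(max(err, 1e-10), 1.0 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)
        preds = [pol if project(x, w) >= thr else -pol for x in X]
        # Reweight: misclassified samples gain weight, then renormalize.
        weights = [wt * math.exp(-alpha * p * t)
                   for wt, p, t in zip(weights, preds, y)]
        z = sum(weights)
        weights = [wt / z for wt in weights]
        model.append((w, thr, pol, alpha))
    return model

def predict(model, x):
    """Weighted vote of the selected stumps: the ensemble decision."""
    vote = sum(alpha * (pol if project(x, w) >= thr else -pol)
               for w, thr, pol, alpha in model)
    return 1 if vote >= 0 else -1
```

The selected directions `w` in `model` play the role of the extracted linear features; the paper's three variants differ precisely in how the candidate pool is built, which here is the one arbitrary choice (`rng.gauss`).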
Detailed description

Author: Masip, David [author]
Format: Article
Language: English
Published: 2005
Subjects:
Note: © Springer-Verlag London Limited 2005
Contained in: Pattern analysis and applications - Springer-Verlag, 1998, 8(2005), 3, 27 Sept., pages 227-237
Contained in: volume:8 ; year:2005 ; number:3 ; day:27 ; month:09 ; pages:227-237
Links:
DOI / URN: 10.1007/s10044-005-0002-x
Catalog ID: OLC2051696519
LEADER 01000caa a22002652 4500
001    OLC2051696519
003    DE-627
005    20230502161349.0
007    tu
008    200819s2005 xx ||||| 00| ||eng c
024 7  |a 10.1007/s10044-005-0002-x |2 doi
035    |a (DE-627)OLC2051696519
035    |a (DE-He213)s10044-005-0002-x-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |a 600 |q VZ
084    |a 54.74$jMaschinelles Sehen |2 bkl
100 1  |a Masip, David |e verfasserin |4 aut
245 10 |a An ensemble-based method for linear feature extraction for two-class problems
264  1 |c 2005
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © Springer-Verlag London Limited 2005
520    |a Abstract In this paper we propose three variants of a linear feature extraction technique based on Adaboost for two-class classification problems. Unlike other feature extraction techniques, we do not make any assumptions about the distribution of the data. At each boosting step we select from a pool of linear projections the one that minimizes the weighted error. We propose three different variants of the feature extraction algorithm, depending on the way the pool of individual projections is constructed. Using nine real and two artificial data sets of different original dimensionality and sample size we compare the performance of the three proposed techniques with three classical techniques for linear feature extraction: Fisher linear discriminant analysis (FLD), Nonparametric discriminant analysis (NDA) and a recently proposed feature extraction method for heteroscedastic data based on the Chernoff criterion. Our results show that for data sets of relatively low-original dimensionality FLD appears to be both the most accurate and the most economical feature extraction method (giving just one-dimension in the case of two classes). The techniques based on Adaboost fare better than the classical techniques for data sets of large original dimensionality.
650  4 |a Feature Extraction
650  4 |a Feature Extraction Method
650  4 |a Feature Extraction Technique
650  4 |a Feature Extraction Algorithm
650  4 |a Fisher Linear Discriminant
700 1  |a Kuncheva, Ludmila I. |4 aut
700 1  |a Vitrià, Jordi |4 aut
773 08 |i Enthalten in |t Pattern analysis and applications |d Springer-Verlag, 1998 |g 8(2005), 3 vom: 27. Sept., Seite 227-237 |w (DE-627)24992921X |w (DE-600)1446989-3 |w (DE-576)27655583X |x 1433-7541 |7 nnns
773 18 |g volume:8 |g year:2005 |g number:3 |g day:27 |g month:09 |g pages:227-237
856 41 |u https://doi.org/10.1007/s10044-005-0002-x |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
912    |a GBV_ILN_70
912    |a GBV_ILN_4277
936 bk |a 54.74$jMaschinelles Sehen |q VZ |0 10641030X |0 (DE-625)10641030X
951    |a AR
952    |d 8 |j 2005 |e 3 |b 27 |c 09 |h 227-237
author_variant |
d m dm l i k li lik j v jv |
matchkey_str |
article:14337541:2005----::nnebeaemtofrieretretatof |
hierarchy_sort_str |
2005 |
bklnumber |
54.74$jMaschinelles Sehen |
publishDate |
2005 |
allfields |
10.1007/s10044-005-0002-x doi (DE-627)OLC2051696519 (DE-He213)s10044-005-0002-x-p DE-627 ger DE-627 rakwb eng 004 600 VZ 54.74$jMaschinelles Sehen bkl Masip, David verfasserin aut An ensemble-based method for linear feature extraction for two-class problems 2005 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer-Verlag London Limited 2005 Abstract In this paper we propose three variants of a linear feature extraction technique based on Adaboost for two-class classification problems. Unlike other feature extraction techniques, we do not make any assumptions about the distribution of the data. At each boosting step we select from a pool of linear projections the one that minimizes the weighted error. We propose three different variants of the feature extraction algorithm, depending on the way the pool of individual projections is constructed. Using nine real and two artificial data sets of different original dimensionality and sample size we compare the performance of the three proposed techniques with three classical techniques for linear feature extraction: Fisher linear discriminant analysis (FLD), Nonparametric discriminant analysis (NDA) and a recently proposed feature extraction method for heteroscedastic data based on the Chernoff criterion. Our results show that for data sets of relatively low-original dimensionality FLD appears to be both the most accurate and the most economical feature extraction method (giving just one-dimension in the case of two classes). The techniques based on Adaboost fare better than the classical techniques for data sets of large original dimensionality. Feature Extraction Feature Extraction Method Feature Extraction Technique Feature Extraction Algorithm Fisher Linear Discriminant Kuncheva, Ludmila I. aut Vitrià, Jordi aut Enthalten in Pattern analysis and applications Springer-Verlag, 1998 8(2005), 3 vom: 27. Sept., Seite 227-237 (DE-627)24992921X (DE-600)1446989-3 (DE-576)27655583X 1433-7541 nnns volume:8 year:2005 number:3 day:27 month:09 pages:227-237 https://doi.org/10.1007/s10044-005-0002-x lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 GBV_ILN_4277 54.74$jMaschinelles Sehen VZ 10641030X (DE-625)10641030X AR 8 2005 3 27 09 227-237
language |
English |
source |
Enthalten in Pattern analysis and applications 8(2005), 3 vom: 27. Sept., Seite 227-237 volume:8 year:2005 number:3 day:27 month:09 pages:227-237 |
sourceStr |
Enthalten in Pattern analysis and applications 8(2005), 3 vom: 27. Sept., Seite 227-237 volume:8 year:2005 number:3 day:27 month:09 pages:227-237 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Feature Extraction Feature Extraction Method Feature Extraction Technique Feature Extraction Algorithm Fisher Linear Discriminant |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Pattern analysis and applications |
authorswithroles_txt_mv |
Masip, David @@aut@@ Kuncheva, Ludmila I. @@aut@@ Vitrià, Jordi @@aut@@ |
publishDateDaySort_date |
2005-09-27T00:00:00Z |
hierarchy_top_id |
24992921X |
dewey-sort |
14 |
id |
OLC2051696519 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2051696519</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230502161349.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2005 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s10044-005-0002-x</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2051696519</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s10044-005-0002-x-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="a">600</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.74$jMaschinelles Sehen</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Masip, David</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">An ensemble-based method for linear feature extraction for two-class problems</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2005</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield 
code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer-Verlag London Limited 2005</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract In this paper we propose three variants of a linear feature extraction technique based on Adaboost for two-class classification problems. Unlike other feature extraction techniques, we do not make any assumptions about the distribution of the data. At each boosting step we select from a pool of linear projections the one that minimizes the weighted error. We propose three different variants of the feature extraction algorithm, depending on the way the pool of individual projections is constructed. Using nine real and two artificial data sets of different original dimensionality and sample size we compare the performance of the three proposed techniques with three classical techniques for linear feature extraction: Fisher linear discriminant analysis (FLD), Nonparametric discriminant analysis (NDA) and a recently proposed feature extraction method for heteroscedastic data based on the Chernoff criterion. Our results show that for data sets of relatively low-original dimensionality FLD appears to be both the most accurate and the most economical feature extraction method (giving just one-dimension in the case of two classes). 
The techniques based on Adaboost fare better than the classical techniques for data sets of large original dimensionality.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction Method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction Technique</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction Algorithm</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Fisher Linear Discriminant</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kuncheva, Ludmila I.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Vitrià, Jordi</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Pattern analysis and applications</subfield><subfield code="d">Springer-Verlag, 1998</subfield><subfield code="g">8(2005), 3 vom: 27. 
Sept., Seite 227-237</subfield><subfield code="w">(DE-627)24992921X</subfield><subfield code="w">(DE-600)1446989-3</subfield><subfield code="w">(DE-576)27655583X</subfield><subfield code="x">1433-7541</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:8</subfield><subfield code="g">year:2005</subfield><subfield code="g">number:3</subfield><subfield code="g">day:27</subfield><subfield code="g">month:09</subfield><subfield code="g">pages:227-237</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s10044-005-0002-x</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4277</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.74$jMaschinelles Sehen</subfield><subfield code="q">VZ</subfield><subfield code="0">10641030X</subfield><subfield code="0">(DE-625)10641030X</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">8</subfield><subfield code="j">2005</subfield><subfield code="e">3</subfield><subfield code="b">27</subfield><subfield code="c">09</subfield><subfield code="h">227-237</subfield></datafield></record></collection>
author |
Masip, David |
spellingShingle |
Masip, David ddc 004 bkl 54.74$jMaschinelles Sehen misc Feature Extraction misc Feature Extraction Method misc Feature Extraction Technique misc Feature Extraction Algorithm misc Fisher Linear Discriminant An ensemble-based method for linear feature extraction for two-class problems |
authorStr |
Masip, David |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)24992921X |
format |
Article |
dewey-ones |
004 - Data processing & computer science 600 - Technology |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
1433-7541 |
topic_title |
004 600 VZ 54.74$jMaschinelles Sehen bkl An ensemble-based method for linear feature extraction for two-class problems Feature Extraction Feature Extraction Method Feature Extraction Technique Feature Extraction Algorithm Fisher Linear Discriminant |
topic |
ddc 004 bkl 54.74$jMaschinelles Sehen misc Feature Extraction misc Feature Extraction Method misc Feature Extraction Technique misc Feature Extraction Algorithm misc Fisher Linear Discriminant |
topic_unstemmed |
ddc 004 bkl 54.74$jMaschinelles Sehen misc Feature Extraction misc Feature Extraction Method misc Feature Extraction Technique misc Feature Extraction Algorithm misc Fisher Linear Discriminant |
topic_browse |
ddc 004 bkl 54.74$jMaschinelles Sehen misc Feature Extraction misc Feature Extraction Method misc Feature Extraction Technique misc Feature Extraction Algorithm misc Fisher Linear Discriminant |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Pattern analysis and applications |
hierarchy_parent_id |
24992921X |
dewey-tens |
000 - Computer science, knowledge & systems 600 - Technology |
hierarchy_top_title |
Pattern analysis and applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)24992921X (DE-600)1446989-3 (DE-576)27655583X |
title |
An ensemble-based method for linear feature extraction for two-class problems |
ctrlnum |
(DE-627)OLC2051696519 (DE-He213)s10044-005-0002-x-p |
title_full |
An ensemble-based method for linear feature extraction for two-class problems |
author_sort |
Masip, David |
journal |
Pattern analysis and applications |
journalStr |
Pattern analysis and applications |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works 600 - Technology |
recordtype |
marc |
publishDateSort |
2005 |
contenttype_str_mv |
txt |
container_start_page |
227 |
author_browse |
Masip, David Kuncheva, Ludmila I. Vitrià, Jordi |
container_volume |
8 |
class |
004 600 VZ 54.74$jMaschinelles Sehen bkl |
format_se |
Aufsätze |
author-letter |
Masip, David |
doi_str_mv |
10.1007/s10044-005-0002-x |
normlink |
10641030X |
normlink_prefix_str_mv |
10641030X (DE-625)10641030X |
dewey-full |
004 600 |
title_sort |
an ensemble-based method for linear feature extraction for two-class problems |
title_auth |
An ensemble-based method for linear feature extraction for two-class problems |
abstract |
Abstract In this paper we propose three variants of a linear feature extraction technique based on Adaboost for two-class classification problems. Unlike other feature extraction techniques, we do not make any assumptions about the distribution of the data. At each boosting step we select from a pool of linear projections the one that minimizes the weighted error. We propose three different variants of the feature extraction algorithm, depending on the way the pool of individual projections is constructed. Using nine real and two artificial data sets of different original dimensionality and sample size we compare the performance of the three proposed techniques with three classical techniques for linear feature extraction: Fisher linear discriminant analysis (FLD), Nonparametric discriminant analysis (NDA) and a recently proposed feature extraction method for heteroscedastic data based on the Chernoff criterion. Our results show that for data sets of relatively low-original dimensionality FLD appears to be both the most accurate and the most economical feature extraction method (giving just one-dimension in the case of two classes). The techniques based on Adaboost fare better than the classical techniques for data sets of large original dimensionality. © Springer-Verlag London Limited 2005 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_70 GBV_ILN_4277 |
container_issue |
3 |
title_short |
An ensemble-based method for linear feature extraction for two-class problems |
url |
https://doi.org/10.1007/s10044-005-0002-x |
remote_bool |
false |
author2 |
Kuncheva, Ludmila I. Vitrià, Jordi |
author2Str |
Kuncheva, Ludmila I. Vitrià, Jordi |
ppnlink |
24992921X |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s10044-005-0002-x |
up_date |
2024-07-04T05:03:44.503Z |
_version_ |
1803623516857171968 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2051696519</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230502161349.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2005 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s10044-005-0002-x</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2051696519</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s10044-005-0002-x-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="a">600</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.74$jMaschinelles Sehen</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Masip, David</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">An ensemble-based method for linear feature extraction for two-class problems</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2005</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield 
code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer-Verlag London Limited 2005</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract In this paper we propose three variants of a linear feature extraction technique based on Adaboost for two-class classification problems. Unlike other feature extraction techniques, we do not make any assumptions about the distribution of the data. At each boosting step we select from a pool of linear projections the one that minimizes the weighted error. We propose three different variants of the feature extraction algorithm, depending on the way the pool of individual projections is constructed. Using nine real and two artificial data sets of different original dimensionality and sample size we compare the performance of the three proposed techniques with three classical techniques for linear feature extraction: Fisher linear discriminant analysis (FLD), Nonparametric discriminant analysis (NDA) and a recently proposed feature extraction method for heteroscedastic data based on the Chernoff criterion. Our results show that for data sets of relatively low original dimensionality FLD appears to be both the most accurate and the most economical feature extraction method (giving just one dimension in the case of two classes).
The techniques based on Adaboost fare better than the classical techniques for data sets of large original dimensionality.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction Method</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction Technique</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature Extraction Algorithm</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Fisher Linear Discriminant</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kuncheva, Ludmila I.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Vitrià, Jordi</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Pattern analysis and applications</subfield><subfield code="d">Springer-Verlag, 1998</subfield><subfield code="g">8(2005), 3 vom: 27. 
Sept., Seite 227-237</subfield><subfield code="w">(DE-627)24992921X</subfield><subfield code="w">(DE-600)1446989-3</subfield><subfield code="w">(DE-576)27655583X</subfield><subfield code="x">1433-7541</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:8</subfield><subfield code="g">year:2005</subfield><subfield code="g">number:3</subfield><subfield code="g">day:27</subfield><subfield code="g">month:09</subfield><subfield code="g">pages:227-237</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s10044-005-0002-x</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4277</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.74$jMaschinelles Sehen</subfield><subfield code="q">VZ</subfield><subfield code="0">10641030X</subfield><subfield code="0">(DE-625)10641030X</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">8</subfield><subfield code="j">2005</subfield><subfield code="e">3</subfield><subfield code="b">27</subfield><subfield code="c">09</subfield><subfield code="h">227-237</subfield></datafield></record></collection>
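The procedure summarized in the abstract (at each boosting round, pick from a pool of linear projections the one that minimizes the weighted classification error, then reweight the samples AdaBoost-style) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' three variants: the pool here is filled with random unit-norm projections, the per-projection classifier is a simple median-threshold stump, and all function and parameter names are hypothetical.

```python
import numpy as np

def boosted_linear_features(X, y, n_features=5, pool_size=50, rng=None):
    """Illustrative AdaBoost-style selection of linear projections.

    At each round, draw a pool of random unit-norm projections, score
    each by the weighted error of a threshold stump on the projected
    data, keep the best one, and reweight the samples as in discrete
    AdaBoost.  Labels y must be in {-1, +1}.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # uniform initial sample weights
    chosen = []
    for _ in range(n_features):
        pool = rng.normal(size=(pool_size, d))
        pool /= np.linalg.norm(pool, axis=1, keepdims=True)
        best = None
        for v in pool:
            z = X @ v
            thr = np.median(z)         # crude stump threshold
            pred = np.where(z > thr, 1, -1)
            # a stump can always be inverted, so take the better side
            err = min(w[pred != y].sum(), w[pred == y].sum())
            if best is None or err < best[0]:
                best = (err, v, thr)
        err, v, thr = best
        err = np.clip(err, 1e-10, 0.5 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X @ v > thr, 1, -1)
        if w[pred != y].sum() > w[pred == y].sum():
            pred = -pred               # use the inverted stump
        w *= np.exp(-alpha * y * pred) # AdaBoost reweighting
        w /= w.sum()
        chosen.append(v)
    return np.array(chosen)            # rows span the extracted subspace
```

The paper's actual variants differ in how the pool of candidate projections is constructed; a random pool is used here only to keep the sketch self-contained.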