On fusing the latent deep CNN feature for image classification
Abstract: Image classification, which aims at assigning a semantic category to images, has been studied extensively during the past few years. More recently, convolutional neural networks have emerged and achieved very promising results. Compared with traditional feature extraction techniques (e.g., S...
Detailed description
Author: Liu, Xueliang [author]
Format: Article
Language: English
Published: 2018
Subjects: Image classification; Convolutional neural network; Late fusion
Note: © Springer Science+Business Media, LLC, part of Springer Nature 2018. Corrected publication 2018
Contained in: World wide web - Springer US, 1998, 22(2018), 2, 15 June, pages 423-436
Contained in: volume:22 ; year:2018 ; number:2 ; day:15 ; month:06 ; pages:423-436
Link: https://doi.org/10.1007/s11280-018-0600-3 (license required, full text)
DOI / URN: 10.1007/s11280-018-0600-3
Catalog ID: OLC2062250681
LEADER 01000caa a22002652 4500
001 OLC2062250681
003 DE-627
005 20230504080411.0
007 tu
008 200819s2018 xx ||||| 00| ||eng c
024 7_ |a 10.1007/s11280-018-0600-3 |2 doi
035 __ |a (DE-627)OLC2062250681
035 __ |a (DE-He213)s11280-018-0600-3-p
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
082 04 |a 004 |q VZ
084 __ |a 24,1 |2 ssgn
084 __ |a 54.84$jWebmanagement |2 bkl
084 __ |a 06.74$jInformationssysteme |2 bkl
100 1_ |a Liu, Xueliang |e verfasserin |4 aut
245 10 |a On fusing the latent deep CNN feature for image classification
264 _1 |c 2018
336 __ |a Text |b txt |2 rdacontent
337 __ |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __ |a Band |b nc |2 rdacarrier
500 __ |a © Springer Science+Business Media, LLC, part of Springer Nature 2018. corrected publication 2018
520 __ |a Abstract: Image classification, which aims at assigning a semantic category to images, has been studied extensively during the past few years. More recently, convolutional neural networks have emerged and achieved very promising results. Compared with traditional feature extraction techniques (e.g., SIFT, HOG, GIST), a convolutional neural network extracts features from images automatically and does not require hand-designed features. However, how to further improve the classification algorithm remains a challenge in academic research. The latest research on CNNs shows that the features extracted from the middle layers are representative, which suggests a possible way to improve classification accuracy. Based on this observation, in this paper we propose a method that fuses the latent features extracted from the middle layers of a CNN to train a more robust classifier. First, we use pretrained CNN models to extract visual features from the middle layers. Then, we train a supervised classifier on each feature separately. Finally, we use a late fusion strategy to combine the predictions of these classifiers. We evaluate the proposal with different classification methods on several image benchmarks, and the results demonstrate that the proposed method improves performance effectively.
650 _4 |a Image classification
650 _4 |a Convolutional neural network
650 _4 |a Late fusion
700 1_ |a Zhang, Rongjie |4 aut
700 1_ |a Meng, Zhijun |0 (orcid)0000-0003-3163-5888 |4 aut
700 1_ |a Hong, Richang |4 aut
700 1_ |a Liu, Guangcan |4 aut
773 08 |i Enthalten in |t World wide web |d Springer US, 1998 |g 22(2018), 2 vom: 15. Juni, Seite 423-436 |w (DE-627)301184976 |w (DE-600)1485096-5 |w (DE-576)9301184974 |x 1386-145X |7 nnns
773 18 |g volume:22 |g year:2018 |g number:2 |g day:15 |g month:06 |g pages:423-436
856 41 |u https://doi.org/10.1007/s11280-018-0600-3 |z lizenzpflichtig |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_OLC
912 __ |a SSG-OLC-BUB
912 __ |a SSG-OLC-MAT
912 __ |a SSG-OPC-BBI
912 __ |a GBV_ILN_70
936 bk |a 54.84$jWebmanagement |q VZ |0 475288947 |0 (DE-625)475288947
936 bk |a 06.74$jInformationssysteme |q VZ |0 106415212 |0 (DE-625)106415212
951 __ |a AR
952 __ |d 22 |j 2018 |e 2 |b 15 |c 06 |h 423-436
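The pipeline described in the abstract (per-layer features, one supervised classifier per feature, predictions combined by late fusion) can be sketched as follows. The record does not state the exact combination rule, so this sketch assumes a simple weighted average of class probabilities; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def late_fusion(prob_list, weights=None):
    """Combine the predictions of several per-feature classifiers.

    prob_list: one (n_samples, n_classes) probability matrix per classifier,
        e.g. classifiers trained on features from different CNN middle layers.
    weights: optional per-classifier weights; defaults to a uniform average.
    Returns the fused class label for each sample.
    """
    probs = np.stack(prob_list)            # (n_classifiers, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.tensordot(weights, probs, axes=1)  # weighted average over classifiers
    return fused.argmax(axis=1)            # most probable class per sample

# Toy example: two classifiers, three samples, two classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.7, 0.3], [0.3, 0.7], [0.1, 0.9]])
print(late_fusion([p1, p2]))  # -> [0 1 1]
```

Averaging probabilities rather than hard labels lets a confident classifier outweigh an uncertain one, which is the usual motivation for late fusion over majority voting.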
author_variant
x l xl r z rz z m zm r h rh g l gl
matchkey_str
article:1386145X:2018----::nuighltndecnetrfrmg
hierarchy_sort_str
2018
bklnumber
54.84$jWebmanagement 06.74$jInformationssysteme
publishDate
2018
allfields |
10.1007/s11280-018-0600-3 doi (DE-627)OLC2062250681 (DE-He213)s11280-018-0600-3-p DE-627 ger DE-627 rakwb eng 004 VZ 24,1 ssgn 54.84$jWebmanagement bkl 06.74$jInformationssysteme bkl Liu, Xueliang verfasserin aut On fusing the latent deep CNN feature for image classification 2018 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2018. corrected publication 2018 Abstract Image classification, which aims at assigning a semantic category to images, has been extensively studied during the past few years. More recently, convolution neural network arises and has achieved very promising achievement. Compared with traditional feature extraction techniques (e.g., SIFT, HOG, GIST), the convolutional neural network can extract features from image automatically and does not need hand designed features. However, how to further improve the classification algorithm is still challenging in academic research. The latest research on CNN shows that the features extracted from middle layers is representative, which shows a possible way to improve the classification accuracy. Based on the observation, in this paper, we propose a method to fuse the latent features extracted from the middle layers in a CNN to train a more robust classifier. First, we utilize the pretrained CNN models to extract visual features from middle layer. Then, we use supervised learning method to train classifiers for each feature respectively. Finally, we use the late fusion strategy to combine the prediction of these classifiers. We evaluate the proposal with different classification methods under some several images benchmarks, and the results demonstrate that the proposed method can improve the performance effectively. 
Image classification Convolutional neural network Late fusion Zhang, Rongjie aut Meng, Zhijun (orcid)0000-0003-3163-5888 aut Hong, Richang aut Liu, Guangcan aut Enthalten in World wide web Springer US, 1998 22(2018), 2 vom: 15. Juni, Seite 423-436 (DE-627)301184976 (DE-600)1485096-5 (DE-576)9301184974 1386-145X nnns volume:22 year:2018 number:2 day:15 month:06 pages:423-436 https://doi.org/10.1007/s11280-018-0600-3 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-BUB SSG-OLC-MAT SSG-OPC-BBI GBV_ILN_70 54.84$jWebmanagement VZ 475288947 (DE-625)475288947 06.74$jInformationssysteme VZ 106415212 (DE-625)106415212 AR 22 2018 2 15 06 423-436 |
language
English
source
Enthalten in World wide web 22(2018), 2 vom: 15. Juni, Seite 423-436 volume:22 year:2018 number:2 day:15 month:06 pages:423-436
sourceStr
Enthalten in World wide web 22(2018), 2 vom: 15. Juni, Seite 423-436 volume:22 year:2018 number:2 day:15 month:06 pages:423-436
format_phy_str_mv
Article
institution
findex.gbv.de
topic_facet
Image classification Convolutional neural network Late fusion
dewey-raw
004
isfreeaccess_bool
false
container_title
World wide web
authorswithroles_txt_mv
Liu, Xueliang @@aut@@ Zhang, Rongjie @@aut@@ Meng, Zhijun @@aut@@ Hong, Richang @@aut@@ Liu, Guangcan @@aut@@
publishDateDaySort_date
2018-06-15T00:00:00Z
hierarchy_top_id
301184976
dewey-sort
14
id
OLC2062250681
language_de
englisch
author
Liu, Xueliang
spellingShingle
Liu, Xueliang ddc 004 ssgn 24,1 bkl 54.84$jWebmanagement bkl 06.74$jInformationssysteme misc Image classification misc Convolutional neural network misc Late fusion On fusing the latent deep CNN feature for image classification
authorStr
Liu, Xueliang
ppnlink_with_tag_str_mv
@@773@@(DE-627)301184976
format
Article
dewey-ones
004 - Data processing & computer science
delete_txt_mv
keep
author_role
aut aut aut aut aut
collection
OLC
remote_str
false
illustrated
Not Illustrated
issn
1386-145X
topic_title
004 VZ 24,1 ssgn 54.84$jWebmanagement bkl 06.74$jInformationssysteme bkl On fusing the latent deep CNN feature for image classification Image classification Convolutional neural network Late fusion
topic
ddc 004 ssgn 24,1 bkl 54.84$jWebmanagement bkl 06.74$jInformationssysteme misc Image classification misc Convolutional neural network misc Late fusion
topic_unstemmed
ddc 004 ssgn 24,1 bkl 54.84$jWebmanagement bkl 06.74$jInformationssysteme misc Image classification misc Convolutional neural network misc Late fusion
topic_browse
ddc 004 ssgn 24,1 bkl 54.84$jWebmanagement bkl 06.74$jInformationssysteme misc Image classification misc Convolutional neural network misc Late fusion
format_facet
Aufsätze Gedruckte Aufsätze
format_main_str_mv
Text Zeitschrift/Artikel
carriertype_str_mv
nc
hierarchy_parent_title
World wide web
hierarchy_parent_id
301184976
dewey-tens
000 - Computer science, knowledge & systems
hierarchy_top_title
World wide web
isfreeaccess_txt
false
familylinks_str_mv
(DE-627)301184976 (DE-600)1485096-5 (DE-576)9301184974
title
On fusing the latent deep CNN feature for image classification
ctrlnum
(DE-627)OLC2062250681 (DE-He213)s11280-018-0600-3-p
title_full
On fusing the latent deep CNN feature for image classification
author_sort
Liu, Xueliang
journal
World wide web
journalStr
World wide web
lang_code
eng
isOA_bool
false
dewey-hundreds
000 - Computer science, information & general works
recordtype
marc
publishDateSort
2018
contenttype_str_mv
txt
container_start_page
423
author_browse
Liu, Xueliang Zhang, Rongjie Meng, Zhijun Hong, Richang Liu, Guangcan
container_volume
22
class
004 VZ 24,1 ssgn 54.84$jWebmanagement bkl 06.74$jInformationssysteme bkl
format_se
Aufsätze
author-letter
Liu, Xueliang
doi_str_mv
10.1007/s11280-018-0600-3
normlink
(ORCID)0000-0003-3163-5888 475288947 106415212
normlink_prefix_str_mv
(orcid)0000-0003-3163-5888 475288947 (DE-625)475288947 106415212 (DE-625)106415212
dewey-full
004
title_sort
on fusing the latent deep cnn feature for image classification
title_auth
On fusing the latent deep CNN feature for image classification
abstract |
Abstract: Image classification, which aims at assigning a semantic category to images, has been studied extensively during the past few years. More recently, convolutional neural networks have emerged and achieved very promising results. Compared with traditional feature extraction techniques (e.g., SIFT, HOG, GIST), a convolutional neural network extracts features from images automatically and does not require hand-designed features. However, how to further improve the classification algorithm remains a challenge in academic research. The latest research on CNNs shows that the features extracted from the middle layers are representative, which suggests a possible way to improve classification accuracy. Based on this observation, in this paper we propose a method that fuses the latent features extracted from the middle layers of a CNN to train a more robust classifier. First, we use pretrained CNN models to extract visual features from the middle layers. Then, we train a supervised classifier on each feature separately. Finally, we use a late fusion strategy to combine the predictions of these classifiers. We evaluate the proposal with different classification methods on several image benchmarks, and the results demonstrate that the proposed method improves performance effectively. © Springer Science+Business Media, LLC, part of Springer Nature 2018. Corrected publication 2018
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-BUB SSG-OLC-MAT SSG-OPC-BBI GBV_ILN_70 |
container_issue |
2 |
title_short |
On fusing the latent deep CNN feature for image classification |
url |
https://doi.org/10.1007/s11280-018-0600-3 |
remote_bool |
false |
author2 |
Zhang, Rongjie Meng, Zhijun Hong, Richang Liu, Guangcan |
author2Str |
Zhang, Rongjie Meng, Zhijun Hong, Richang Liu, Guangcan |
ppnlink |
301184976 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11280-018-0600-3 |
up_date |
2024-07-03T14:21:40.361Z |
_version_ |
1803568021879390210 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2062250681</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230504080411.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2018 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11280-018-0600-3</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2062250681</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11280-018-0600-3-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">24,1</subfield><subfield code="2">ssgn</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.84$jWebmanagement</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">06.74$jInformationssysteme</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Liu, Xueliang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">On fusing the latent deep CNN feature for image classification</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2018</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield 
code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media, LLC, part of Springer Nature 2018. corrected publication 2018</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Image classification, which aims at assigning a semantic category to images, has been extensively studied during the past few years. More recently, convolutional neural networks have emerged and achieved very promising results. Compared with traditional feature extraction techniques (e.g., SIFT, HOG, GIST), a convolutional neural network can extract features from images automatically and does not need hand-designed features. However, how to further improve the classification algorithm remains a challenging research problem. The latest research on CNNs shows that the features extracted from middle layers are representative, which suggests a possible way to improve classification accuracy. Based on this observation, in this paper we propose a method that fuses the latent features extracted from the middle layers of a CNN to train a more robust classifier. First, we utilize pretrained CNN models to extract visual features from the middle layers. Then, we use supervised learning to train a classifier for each feature respectively. Finally, we use a late fusion strategy to combine the predictions of these classifiers.
We evaluate the proposal with different classification methods on several image benchmarks, and the results demonstrate that the proposed method improves performance effectively.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Image classification</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Convolutional neural network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Late fusion</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Rongjie</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Meng, Zhijun</subfield><subfield code="0">(orcid)0000-0003-3163-5888</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hong, Richang</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liu, Guangcan</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">World wide web</subfield><subfield code="d">Springer US, 1998</subfield><subfield code="g">22(2018), 2 vom: 15.
Juni, Seite 423-436</subfield><subfield code="w">(DE-627)301184976</subfield><subfield code="w">(DE-600)1485096-5</subfield><subfield code="w">(DE-576)9301184974</subfield><subfield code="x">1386-145X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:22</subfield><subfield code="g">year:2018</subfield><subfield code="g">number:2</subfield><subfield code="g">day:15</subfield><subfield code="g">month:06</subfield><subfield code="g">pages:423-436</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11280-018-0600-3</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-BUB</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OPC-BBI</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.84$jWebmanagement</subfield><subfield code="q">VZ</subfield><subfield code="0">475288947</subfield><subfield code="0">(DE-625)475288947</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">06.74$jInformationssysteme</subfield><subfield code="q">VZ</subfield><subfield code="0">106415212</subfield><subfield code="0">(DE-625)106415212</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">22</subfield><subfield 
code="j">2018</subfield><subfield code="e">2</subfield><subfield code="b">15</subfield><subfield code="c">06</subfield><subfield code="h">423-436</subfield></datafield></record></collection>
|
score |
7.4010115 |