Online deep learning based on auto-encoder
Abstract: Online learning is an important technique for sketching massive real-time, high-speed data. Although this direction has attracted intensive attention, most of the literature in this area ignores the following three issues: (1) it pays little attention to the underlying abstract hierarchica...
Detailed description

Author: Zhang, Si-si [author]
Format: Article
Language: English
Published: 2021
Subjects:
Note: © Springer Science+Business Media, LLC, part of Springer Nature 2021
Parent work: Contained in: Applied intelligence - Springer US, 1991, 51(2021), issue 8, 09 Jan., pages 5420-5439
Parent work: volume:51 ; year:2021 ; number:8 ; day:09 ; month:01 ; pages:5420-5439
Links:
DOI / URN: 10.1007/s10489-020-02058-8
Catalog ID: OLC212654530X
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | OLC212654530X | ||
003 | DE-627 | ||
005 | 20230505115620.0 | ||
007 | tu | ||
008 | 230505s2021 xx ||||| 00| ||eng c | ||
024 | 7 | |a 10.1007/s10489-020-02058-8 |2 doi | |
035 | |a (DE-627)OLC212654530X | ||
035 | |a (DE-He213)s10489-020-02058-8-p | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
082 | 0 | 4 | |a 004 |q VZ |
100 | 1 | |a Zhang, Si-si |e verfasserin |4 aut | |
245 | 1 | 0 | |a Online deep learning based on auto-encoder |
264 | 1 | |c 2021 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia | ||
338 | |a Band |b nc |2 rdacarrier | ||
500 | |a © Springer Science+Business Media, LLC, part of Springer Nature 2021 | ||
520 | |a Abstract: Online learning is an important technique for sketching massive real-time, high-speed data. Although this direction has attracted intensive attention, most of the literature in this area ignores the following three issues: (1) it pays little attention to the underlying abstract hierarchical latent information in examples, even though extracting these abstract hierarchical latent representations helps to better predict the class labels of examples; (2) a model whose structure is fixed before any data points are seen is not suitable for modeling streaming data with an evolving probability distribution. This challenge is referred to as "model flexibility"; with this in mind, the online deep learning model we design should have a variable underlying structure; (3) moreover, it is of utmost importance to fuse these abstract hierarchical latent representations to achieve better classification performance, and we should give different weights to different levels of latent representation when dealing with data streams whose distribution changes. To address these issues, we propose a two-phase Online Deep Learning model based on an Auto-Encoder (ODLAE). Based on the auto-encoder's reconstruction loss, we extract abstract hierarchical latent representations of instances; based on the predictive loss, we devise two fusion strategies: an output-level fusion strategy, which fuses the classification results of each hidden layer of the encoder, and a feature-level fusion strategy, which leverages a self-attention mechanism to fuse the outputs of all hidden layers. Finally, to improve the robustness of the algorithm, we also utilize a denoising auto-encoder to yield the hierarchical latent representations. Experimental results on different datasets verify that our proposed algorithm (ODLAE) outperforms several baselines. | ||
650 | 4 | |a Online deep learning | |
650 | 4 | |a Auto-encoder | |
650 | 4 | |a Output-level fusion | |
650 | 4 | |a Feature-level fusion | |
650 | 4 | |a Denoising auto-encoder | |
700 | 1 | |a Liu, Jian-wei |4 aut | |
700 | 1 | |a Zuo, Xin |4 aut | |
700 | 1 | |a Lu, Run-kun |4 aut | |
700 | 1 | |a Lian, Si-ming |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Applied intelligence |d Springer US, 1991 |g 51(2021), 8 vom: 09. Jan., Seite 5420-5439 |w (DE-627)130990515 |w (DE-600)1080229-0 |w (DE-576)029154286 |x 0924-669X |7 nnns |
773 | 1 | 8 | |g volume:51 |g year:2021 |g number:8 |g day:09 |g month:01 |g pages:5420-5439 |
856 | 4 | 1 | |u https://doi.org/10.1007/s10489-020-02058-8 |z lizenzpflichtig |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_OLC | ||
912 | |a SSG-OLC-MAT | ||
951 | |a AR | ||
952 | |d 51 |j 2021 |e 8 |b 09 |c 01 |h 5420-5439 |
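The abstract above describes an output-level fusion strategy: each hidden layer of the encoder feeds its own classifier, and the per-layer predictions are combined with adaptive weights. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the layer sizes, the tanh encoder, and the hedge-style multiplicative weight update are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions; the paper does not fix these here.
d_in, d_h1, d_h2, n_cls = 8, 6, 4, 3
W1 = rng.normal(size=(d_h1, d_in)) * 0.1   # encoder layer 1
W2 = rng.normal(size=(d_h2, d_h1)) * 0.1   # encoder layer 2
C1 = rng.normal(size=(n_cls, d_h1)) * 0.1  # classifier head on layer 1
C2 = rng.normal(size=(n_cls, d_h2)) * 0.1  # classifier head on layer 2
alpha = np.ones(2) / 2                     # fusion weights over the two layers

def predict(x):
    # Hierarchical latent representations from the encoder.
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    p1, p2 = softmax(C1 @ h1), softmax(C2 @ h2)
    # Output-level fusion: weighted combination of per-layer predictions.
    return alpha[0] * p1 + alpha[1] * p2, (p1, p2)

def update_weights(per_layer_probs, y, beta=0.9):
    # Hedge-style multiplicative update (an assumption; the paper's exact
    # weighting rule may differ): layers with lower loss gain weight.
    global alpha
    losses = np.array([-np.log(p[y] + 1e-12) for p in per_layer_probs])
    alpha = alpha * beta ** losses
    alpha = alpha / alpha.sum()

x, y = rng.normal(size=d_in), 1
p, per_layer = predict(x)   # fused class probabilities
update_weights(per_layer, y)
```

Because each per-layer prediction is a probability vector and the fusion weights stay normalized, the fused output remains a valid distribution, which is what lets layer weights drift toward whichever depth currently predicts the stream best.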
abstract |
Abstract Online learning is an important technical means for sketching massive real-time and high-speed data. Although this direction has attracted intensive attention, most of the literature in this area ignore the following three issues: (1) they think little of the underlying abstract hierarchical latent information existing in examples, even if extracting these abstract hierarchical latent representations is useful to better predict the class labels of examples; (2) the idea of preassigned model on unseen datapoints is not suitable for modeling streaming data with evolving probability distribution. This challenge is referred as “model flexibility”. And so, with this in minds, the online deep learning model we need to design should have a variable underlying structure; (3) moreover, it is of utmost importance to fusion these abstract hierarchical latent representations to achieve better classification performance, and we should give different weights to different levels of implicit representation information when dealing with the data streaming where the data distribution changes. To address these issues, we propose a two-phase Online Deep Learning based on Auto-Encoder (ODLAE). Based on auto-encoder, considering reconstruction loss, we extract abstract hierarchical latent representations of instances; Based on predictive loss, we devise two fusion strategies: the output-level fusion strategy, which is obtained by fusing the classification results of encoder’s each hidden layer; and feature-level fusion strategy, which is leveraged self-attention mechanism to fusion the every hidden layer’s output. Finally, in order to improve the robustness of the algorithm, we also try to utilize the denoising auto-encoder to yield hierarchical latent representations. Experimental results on different datasets are presented to verify the validity of our proposed algorithm (ODLAE) outperforms several baselines. 
© Springer Science+Business Media, LLC, part of Springer Nature 2021 |
abstractGer |
Abstract Online learning is an important technical means for sketching massive real-time and high-speed data. Although this direction has attracted intensive attention, most of the literature in this area ignore the following three issues: (1) they think little of the underlying abstract hierarchical latent information existing in examples, even if extracting these abstract hierarchical latent representations is useful to better predict the class labels of examples; (2) the idea of preassigned model on unseen datapoints is not suitable for modeling streaming data with evolving probability distribution. This challenge is referred as “model flexibility”. And so, with this in minds, the online deep learning model we need to design should have a variable underlying structure; (3) moreover, it is of utmost importance to fusion these abstract hierarchical latent representations to achieve better classification performance, and we should give different weights to different levels of implicit representation information when dealing with the data streaming where the data distribution changes. To address these issues, we propose a two-phase Online Deep Learning based on Auto-Encoder (ODLAE). Based on auto-encoder, considering reconstruction loss, we extract abstract hierarchical latent representations of instances; Based on predictive loss, we devise two fusion strategies: the output-level fusion strategy, which is obtained by fusing the classification results of encoder’s each hidden layer; and feature-level fusion strategy, which is leveraged self-attention mechanism to fusion the every hidden layer’s output. Finally, in order to improve the robustness of the algorithm, we also try to utilize the denoising auto-encoder to yield hierarchical latent representations. Experimental results on different datasets are presented to verify the validity of our proposed algorithm (ODLAE) outperforms several baselines. 
© Springer Science+Business Media, LLC, part of Springer Nature 2021 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT |
container_issue |
8 |
title_short |
Online deep learning based on auto-encoder |
url |
https://doi.org/10.1007/s10489-020-02058-8 |
remote_bool |
false |
author2 |
Liu, Jian-wei Zuo, Xin Lu, Run-kun Lian, Si-ming |
author2Str |
Liu, Jian-wei Zuo, Xin Lu, Run-kun Lian, Si-ming |
ppnlink |
130990515 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s10489-020-02058-8 |
up_date |
2024-07-04T07:23:13.975Z |
_version_ |
1803632292896178176 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000naa a22002652 4500</leader><controlfield tag="001">OLC212654530X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230505115620.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">230505s2021 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s10489-020-02058-8</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC212654530X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s10489-020-02058-8-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Zhang, Si-si</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Online deep learning based on auto-encoder</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media, LLC, part of Springer Nature 2021</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Online learning is an important technique for sketching massive, real-time, high-speed data. Although this direction has attracted intensive attention, most of the literature in this area ignores the following three issues: (1) it pays little attention to the underlying abstract hierarchical latent information in examples, even though extracting these abstract hierarchical latent representations helps to better predict the class labels of examples; (2) a model preassigned on unseen data points is not suitable for modeling streaming data with an evolving probability distribution. This challenge is referred to as “model flexibility”, and with this in mind, the online deep learning model we design should have a variable underlying structure; (3) moreover, it is of utmost importance to fuse these abstract hierarchical latent representations to achieve better classification performance, and different weights should be given to the different levels of latent representation when the data distribution of the stream changes. To address these issues, we propose a two-phase Online Deep Learning model based on an Auto-Encoder (ODLAE). Based on the auto-encoder’s reconstruction loss, we extract abstract hierarchical latent representations of instances; based on the predictive loss, we devise two fusion strategies: an output-level fusion strategy, which fuses the classification results of each hidden layer of the encoder, and a feature-level fusion strategy, which leverages a self-attention mechanism to fuse the outputs of all hidden layers. 
Finally, to improve the robustness of the algorithm, we also utilize a denoising auto-encoder to yield the hierarchical latent representations. Experimental results on different datasets verify that our proposed algorithm (ODLAE) outperforms several baselines.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Online deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Auto-encoder</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Output-level fusion</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Feature-level fusion</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Denoising auto-encoder</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liu, Jian-wei</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zuo, Xin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lu, Run-kun</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lian, Si-ming</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Applied intelligence</subfield><subfield code="d">Springer US, 1991</subfield><subfield code="g">51(2021), 8 vom: 09. 
Jan., Seite 5420-5439</subfield><subfield code="w">(DE-627)130990515</subfield><subfield code="w">(DE-600)1080229-0</subfield><subfield code="w">(DE-576)029154286</subfield><subfield code="x">0924-669X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:51</subfield><subfield code="g">year:2021</subfield><subfield code="g">number:8</subfield><subfield code="g">day:09</subfield><subfield code="g">month:01</subfield><subfield code="g">pages:5420-5439</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s10489-020-02058-8</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">51</subfield><subfield code="j">2021</subfield><subfield code="e">8</subfield><subfield code="b">09</subfield><subfield code="c">01</subfield><subfield code="h">5420-5439</subfield></datafield></record></collection>
|