Video hashing based on appearance and attention features fusion via DBN
Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, most existing video hashing algorithms generate the video hash only from low-level features or their combinations, which are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, the hash distance is treated as a vector to measure the similarity between hashes: the bit error rate (BER) is used as the amplitude of the hash distance and the vector cosine similarity as its angle. Experimental results demonstrate that the fusion of visual appearance and attention features improves the recall and precision of the video hash, and that the angle of the hash distance helps improve the accuracy of hash matching.
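The abstract describes measuring hash similarity with a vector-valued distance: the bit error rate (BER) between two binary hashes gives the amplitude component, and the vector cosine similarity gives the angle component. The following is a minimal sketch of that measure, assuming binary hash vectors; the function names, thresholds, and the two-threshold matching rule are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def ber(h1: np.ndarray, h2: np.ndarray) -> float:
    """Bit error rate between two equal-length binary hash vectors (0/1 values)."""
    assert h1.shape == h2.shape
    return float(np.count_nonzero(h1 != h2)) / h1.size

def cosine_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Cosine similarity between the two hashes, treated as real-valued vectors."""
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2) / denom if denom > 0 else 0.0

def hash_distance(h1: np.ndarray, h2: np.ndarray) -> tuple[float, float]:
    """Vector-valued hash distance: BER as the amplitude component and cosine
    similarity as the angle component, following the abstract's description."""
    return ber(h1, h2), cosine_similarity(h1, h2)

def is_match(h1, h2, max_ber=0.2, min_cos=0.8):
    """Hypothetical matching rule (thresholds are illustrative, not the paper's):
    declare a match when the amplitude is small and the angle component is large."""
    amp, ang = hash_distance(h1, h2)
    return amp <= max_ber and ang >= min_cos

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.integers(0, 2, size=64)
    near_duplicate = query.copy()
    near_duplicate[:4] ^= 1  # flip a few bits, mimicking a mild distortion
    print(hash_distance(query, near_duplicate), is_match(query, near_duplicate))
```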
Detailed description
Author: Sun, Jiande [author]
Format: E-article
Language: English
Published: 2016 (transfer abstract)
Subjects: Deep belief network (DBN); Video hashing; Feature fusion; Angle of hash distance; Visual attention
Extent: 11
Parent work: Contained in: The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast - Liu, Yang ELSEVIER, 2018, an international journal, Amsterdam
Parent work: volume:213 ; year:2016 ; day:12 ; month:11 ; pages:84-94 ; extent:11
Links:
DOI / URN: 10.1016/j.neucom.2016.05.098
Catalog ID: ELV024602531
LEADER | 01000caa a22002652 4500 | ||
001 | ELV024602531 | ||
003 | DE-627 | ||
005 | 20230625143139.0 | ||
007 | cr uuu---uuuuu | ||
008 | 180603s2016 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.neucom.2016.05.098 |2 doi | |
028 | 5 | 2 | |a GBVA2016014000028.pica |
035 | |a (DE-627)ELV024602531 | ||
035 | |a (ELSEVIER)S0925-2312(16)30725-1 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
082 | 0 | |a 610 | |
082 | 0 | 4 | |a 610 |q DE-600 |
082 | 0 | 4 | |a 570 |q VZ |
084 | |a BIODIV |q DE-30 |2 fid | ||
084 | |a 35.70 |2 bkl | ||
084 | |a 42.12 |2 bkl | ||
100 | 1 | |a Sun, Jiande |e verfasserin |4 aut | |
245 | 1 | 0 | |a Video hashing based on appearance and attention features fusion via DBN |
264 | 1 | |c 2016transfer abstract | |
300 | |a 11 | ||
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a nicht spezifiziert |b z |2 rdamedia | ||
338 | |a nicht spezifiziert |b zu |2 rdacarrier | ||
520 | |a Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, most existing video hashing algorithms generate the video hash only from low-level features or their combinations, which are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, the hash distance is treated as a vector to measure the similarity between hashes: the bit error rate (BER) is used as the amplitude of the hash distance and the vector cosine similarity as its angle. Experimental results demonstrate that the fusion of visual appearance and attention features improves the recall and precision of the video hash, and that the angle of the hash distance helps improve the accuracy of hash matching. | ||
650 | 7 | |a Deep belief network (DBN) |2 Elsevier | |
650 | 7 | |a Video hashing |2 Elsevier | |
650 | 7 | |a Feature fusion |2 Elsevier | |
650 | 7 | |a Angle of hash distance |2 Elsevier | |
650 | 7 | |a Visual attention |2 Elsevier | |
700 | 1 | |a Liu, Xiaocui |4 oth | |
700 | 1 | |a Wan, Wenbo |4 oth | |
700 | 1 | |a Li, Jing |4 oth | |
700 | 1 | |a Zhao, Dong |4 oth | |
700 | 1 | |a Zhang, Huaxiang |4 oth | |
773 | 0 | 8 | |i Enthalten in |n Elsevier |a Liu, Yang ELSEVIER |t The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |d 2018 |d an international journal |g Amsterdam |w (DE-627)ELV002603926 |
773 | 1 | 8 | |g volume:213 |g year:2016 |g day:12 |g month:11 |g pages:84-94 |g extent:11 |
856 | 4 | 0 | |u https://doi.org/10.1016/j.neucom.2016.05.098 |3 Volltext |
912 | |a GBV_USEFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SYSFLAG_U | ||
912 | |a FID-BIODIV | ||
912 | |a SSG-OLC-PHA | ||
936 | b | k | |a 35.70 |j Biochemie: Allgemeines |q VZ |
936 | b | k | |a 42.12 |j Biophysik |q VZ |
951 | |a AR | ||
952 | |d 213 |j 2016 |b 12 |c 1112 |h 84-94 |g 11 | ||
953 | |2 045F |a 610 |
author_variant |
j s js |
matchkey_str |
sunjiandeliuxiaocuiwanwenbolijingzhaodon:2016----:iehsigaeoapaacadtetofa |
hierarchy_sort_str |
2016transfer abstract |
bklnumber |
35.70 42.12 |
publishDate |
2016 |
allfields |
10.1016/j.neucom.2016.05.098 doi GBVA2016014000028.pica (DE-627)ELV024602531 (ELSEVIER)S0925-2312(16)30725-1 DE-627 ger DE-627 rakwb eng 610 610 DE-600 570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl Sun, Jiande verfasserin aut Video hashing based on appearance and attention features fusion via DBN 2016transfer abstract 11 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention Elsevier Liu, Xiaocui oth Wan, Wenbo oth Li, Jing oth Zhao, Dong oth Zhang, Huaxiang oth Enthalten in Elsevier Liu, Yang ELSEVIER The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast 2018 an international journal Amsterdam (DE-627)ELV002603926 volume:213 year:2016 day:12 month:11 pages:84-94 extent:11 https://doi.org/10.1016/j.neucom.2016.05.098 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U FID-BIODIV SSG-OLC-PHA 35.70 Biochemie: Allgemeines VZ 42.12 Biophysik VZ AR 213 2016 12 1112 84-94 11 045F 610 |
spelling |
10.1016/j.neucom.2016.05.098 doi GBVA2016014000028.pica (DE-627)ELV024602531 (ELSEVIER)S0925-2312(16)30725-1 DE-627 ger DE-627 rakwb eng 610 610 DE-600 570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl Sun, Jiande verfasserin aut Video hashing based on appearance and attention features fusion via DBN 2016transfer abstract 11 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention Elsevier Liu, Xiaocui oth Wan, Wenbo oth Li, Jing oth Zhao, Dong oth Zhang, Huaxiang oth Enthalten in Elsevier Liu, Yang ELSEVIER The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast 2018 an international journal Amsterdam (DE-627)ELV002603926 volume:213 year:2016 day:12 month:11 pages:84-94 extent:11 https://doi.org/10.1016/j.neucom.2016.05.098 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U FID-BIODIV SSG-OLC-PHA 35.70 Biochemie: Allgemeines VZ 42.12 Biophysik VZ AR 213 2016 12 1112 84-94 11 045F 610 |
allfields_unstemmed |
10.1016/j.neucom.2016.05.098 doi GBVA2016014000028.pica (DE-627)ELV024602531 (ELSEVIER)S0925-2312(16)30725-1 DE-627 ger DE-627 rakwb eng 610 610 DE-600 570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl Sun, Jiande verfasserin aut Video hashing based on appearance and attention features fusion via DBN 2016transfer abstract 11 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention Elsevier Liu, Xiaocui oth Wan, Wenbo oth Li, Jing oth Zhao, Dong oth Zhang, Huaxiang oth Enthalten in Elsevier Liu, Yang ELSEVIER The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast 2018 an international journal Amsterdam (DE-627)ELV002603926 volume:213 year:2016 day:12 month:11 pages:84-94 extent:11 https://doi.org/10.1016/j.neucom.2016.05.098 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U FID-BIODIV SSG-OLC-PHA 35.70 Biochemie: Allgemeines VZ 42.12 Biophysik VZ AR 213 2016 12 1112 84-94 11 045F 610 |
allfieldsGer |
10.1016/j.neucom.2016.05.098 doi GBVA2016014000028.pica (DE-627)ELV024602531 (ELSEVIER)S0925-2312(16)30725-1 DE-627 ger DE-627 rakwb eng 610 610 DE-600 570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl Sun, Jiande verfasserin aut Video hashing based on appearance and attention features fusion via DBN 2016transfer abstract 11 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention Elsevier Liu, Xiaocui oth Wan, Wenbo oth Li, Jing oth Zhao, Dong oth Zhang, Huaxiang oth Enthalten in Elsevier Liu, Yang ELSEVIER The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast 2018 an international journal Amsterdam (DE-627)ELV002603926 volume:213 year:2016 day:12 month:11 pages:84-94 extent:11 https://doi.org/10.1016/j.neucom.2016.05.098 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U FID-BIODIV SSG-OLC-PHA 35.70 Biochemie: Allgemeines VZ 42.12 Biophysik VZ AR 213 2016 12 1112 84-94 11 045F 610 |
allfieldsSound |
10.1016/j.neucom.2016.05.098 doi GBVA2016014000028.pica (DE-627)ELV024602531 (ELSEVIER)S0925-2312(16)30725-1 DE-627 ger DE-627 rakwb eng 610 610 DE-600 570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl Sun, Jiande verfasserin aut Video hashing based on appearance and attention features fusion via DBN 2016transfer abstract 11 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention Elsevier Liu, Xiaocui oth Wan, Wenbo oth Li, Jing oth Zhao, Dong oth Zhang, Huaxiang oth Enthalten in Elsevier Liu, Yang ELSEVIER The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast 2018 an international journal Amsterdam (DE-627)ELV002603926 volume:213 year:2016 day:12 month:11 pages:84-94 extent:11 https://doi.org/10.1016/j.neucom.2016.05.098 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U FID-BIODIV SSG-OLC-PHA 35.70 Biochemie: Allgemeines VZ 42.12 Biophysik VZ AR 213 2016 12 1112 84-94 11 045F 610 |
language |
English |
source |
Enthalten in The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast Amsterdam volume:213 year:2016 day:12 month:11 pages:84-94 extent:11 |
sourceStr |
Enthalten in The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast Amsterdam volume:213 year:2016 day:12 month:11 pages:84-94 extent:11 |
format_phy_str_mv |
Article |
bklname |
Biochemie: Allgemeines Biophysik |
institution |
findex.gbv.de |
topic_facet |
Deep belief network (DBN) Video hashing Feature fusion Angle of hash distance Visual attention |
dewey-raw |
610 |
isfreeaccess_bool |
false |
container_title |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
authorswithroles_txt_mv |
Sun, Jiande @@aut@@ Liu, Xiaocui @@oth@@ Wan, Wenbo @@oth@@ Li, Jing @@oth@@ Zhao, Dong @@oth@@ Zhang, Huaxiang @@oth@@ |
publishDateDaySort_date |
2016-01-12T00:00:00Z |
hierarchy_top_id |
ELV002603926 |
dewey-sort |
3610 |
id |
ELV024602531 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV024602531</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230625143139.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">180603s2016 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.neucom.2016.05.098</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">GBVA2016014000028.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV024602531</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0925-2312(16)30725-1</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">610</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">DE-600</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">570</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">BIODIV</subfield><subfield code="q">DE-30</subfield><subfield code="2">fid</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">35.70</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">42.12</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Sun, Jiande</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Video hashing based on appearance and attention features fusion via DBN</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2016transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">11</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. 
In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Deep belief network (DBN)</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Video hashing</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Feature fusion</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Angle of hash distance</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Visual attention</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liu, Xiaocui</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wan, Wenbo</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Jing</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhao, Dong</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Huaxiang</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier</subfield><subfield code="a">Liu, Yang ELSEVIER</subfield><subfield code="t">The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast</subfield><subfield code="d">2018</subfield><subfield code="d">an international journal</subfield><subfield code="g">Amsterdam</subfield><subfield code="w">(DE-627)ELV002603926</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:213</subfield><subfield code="g">year:2016</subfield><subfield code="g">day:12</subfield><subfield 
code="g">month:11</subfield><subfield code="g">pages:84-94</subfield><subfield code="g">extent:11</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.neucom.2016.05.098</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">FID-BIODIV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">35.70</subfield><subfield code="j">Biochemie: Allgemeines</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">42.12</subfield><subfield code="j">Biophysik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">213</subfield><subfield code="j">2016</subfield><subfield code="b">12</subfield><subfield code="c">1112</subfield><subfield code="h">84-94</subfield><subfield code="g">11</subfield></datafield><datafield tag="953" ind1=" " ind2=" "><subfield code="2">045F</subfield><subfield code="a">610</subfield></datafield></record></collection>
|
author |
Sun, Jiande |
spellingShingle |
Sun, Jiande ddc 610 ddc 570 fid BIODIV bkl 35.70 bkl 42.12 Elsevier Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention Video hashing based on appearance and attention features fusion via DBN |
authorStr |
Sun, Jiande |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV002603926 |
format |
electronic Article |
dewey-ones |
610 - Medicine & health 570 - Life sciences; biology |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
610 610 DE-600 570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl Video hashing based on appearance and attention features fusion via DBN Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention Elsevier |
topic |
ddc 610 ddc 570 fid BIODIV bkl 35.70 bkl 42.12 Elsevier Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention |
topic_unstemmed |
ddc 610 ddc 570 fid BIODIV bkl 35.70 bkl 42.12 Elsevier Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention |
topic_browse |
ddc 610 ddc 570 fid BIODIV bkl 35.70 bkl 42.12 Elsevier Deep belief network (DBN) Elsevier Video hashing Elsevier Feature fusion Elsevier Angle of hash distance Elsevier Visual attention |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
x l xl w w ww j l jl d z dz h z hz |
hierarchy_parent_title |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
hierarchy_parent_id |
ELV002603926 |
dewey-tens |
610 - Medicine & health 570 - Life sciences; biology |
hierarchy_top_title |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV002603926 |
title |
Video hashing based on appearance and attention features fusion via DBN |
ctrlnum |
(DE-627)ELV024602531 (ELSEVIER)S0925-2312(16)30725-1 |
title_full |
Video hashing based on appearance and attention features fusion via DBN |
author_sort |
Sun, Jiande |
journal |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
journalStr |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology 500 - Science |
recordtype |
marc |
publishDateSort |
2016 |
contenttype_str_mv |
zzz |
container_start_page |
84 |
author_browse |
Sun, Jiande |
container_volume |
213 |
physical |
11 |
class |
610 610 DE-600 570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Sun, Jiande |
doi_str_mv |
10.1016/j.neucom.2016.05.098 |
dewey-full |
610 570 |
title_sort |
video hashing based on appearance and attention features fusion via dbn |
title_auth |
Video hashing based on appearance and attention features fusion via DBN |
abstract |
Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. |
abstractGer |
Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. |
abstract_unstemmed |
Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching. |
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U FID-BIODIV SSG-OLC-PHA |
title_short |
Video hashing based on appearance and attention features fusion via DBN |
url |
https://doi.org/10.1016/j.neucom.2016.05.098 |
remote_bool |
true |
author2 |
Liu, Xiaocui Wan, Wenbo Li, Jing Zhao, Dong Zhang, Huaxiang |
author2Str |
Liu, Xiaocui Wan, Wenbo Li, Jing Zhao, Dong Zhang, Huaxiang |
ppnlink |
ELV002603926 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth oth oth |
doi_str |
10.1016/j.neucom.2016.05.098 |
up_date |
2024-07-06T21:52:05.234Z |
_version_ |
1803868150417063936 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV024602531</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230625143139.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">180603s2016 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.neucom.2016.05.098</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">GBVA2016014000028.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV024602531</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0925-2312(16)30725-1</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">610</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">DE-600</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">570</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">BIODIV</subfield><subfield code="q">DE-30</subfield><subfield code="2">fid</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">35.70</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">42.12</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Sun, Jiande</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Video hashing based on appearance and attention features fusion via DBN</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2016transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">11</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. 
In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, only low-level features or their combinations, referred to as appearance features, are used to generate the video hash in most of the existing video hashing algorithms and these kinds of features are referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, hash distance is taken as a vector to measure the similarity between hashes. BER is used as the amplitude of hash distance and the vector cosine similarity is used as the angle of hash distance. Experimental results demonstrate that the fusion of visual appearance and attention features brings about better performance of video hash on recall and precision rates, and the angle of hash distance is useful to improve the accuracy of hash matching.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Deep belief network (DBN)</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Video hashing</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Feature fusion</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Angle of hash distance</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Visual attention</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liu, Xiaocui</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wan, Wenbo</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Jing</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhao, Dong</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Huaxiang</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier</subfield><subfield code="a">Liu, Yang ELSEVIER</subfield><subfield code="t">The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast</subfield><subfield code="d">2018</subfield><subfield code="d">an international journal</subfield><subfield code="g">Amsterdam</subfield><subfield code="w">(DE-627)ELV002603926</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:213</subfield><subfield code="g">year:2016</subfield><subfield code="g">day:12</subfield><subfield 
code="g">month:11</subfield><subfield code="g">pages:84-94</subfield><subfield code="g">extent:11</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.neucom.2016.05.098</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">FID-BIODIV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">35.70</subfield><subfield code="j">Biochemie: Allgemeines</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">42.12</subfield><subfield code="j">Biophysik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">213</subfield><subfield code="j">2016</subfield><subfield code="b">12</subfield><subfield code="c">1112</subfield><subfield code="h">84-94</subfield><subfield code="g">11</subfield></datafield><datafield tag="953" ind1=" " ind2=" "><subfield code="2">045F</subfield><subfield code="a">610</subfield></datafield></record></collection>
|
score |
7.3997965 |