Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors
Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works have addressed object detection with this sensor. In this work, we develop pedestrian detectors that unlock the potential of event data by leveraging multi-cue information and different fusion strategies. To make the best of the event data, we introduce three event-stream encoding methods based on Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion of the probability assignments. We explain, qualitatively and quantitatively, why the different encoding methods were chosen for evaluating pedestrian detection and which performs best. We demonstrate the advantages of decision-level fusion by leveraging multi-cue event information and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for more fascinating perception applications with neuromorphic vision sensors.
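The three encodings named in the abstract turn an asynchronous event stream into frame-like tensors that a convolutional detector can consume. The following is a minimal, illustrative sketch in Python, assuming events arrive as (x, y, t, polarity) tuples sorted by timestamp t in seconds; the helper names, the normalization, and the LIF constants tau and threshold are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the three event-stream encodings from the abstract:
# Frequency, Surface of Active Events (SAE), Leaky Integrate-and-Fire (LIF).
# Events are assumed to be (x, y, t, polarity) tuples sorted by t (seconds);
# all names and constants are assumptions, not the authors' reference code.
import numpy as np

def frequency_frame(events, shape):
    """Frequency encoding: count how often each pixel fired in the window."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, t, p in events:
        frame[y, x] += 1.0
    return frame / max(frame.max(), 1.0)           # normalize to [0, 1]

def sae_frame(events, shape):
    """SAE encoding: keep the most recent timestamp seen at each pixel."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, t, p in events:
        frame[y, x] = t                            # later events overwrite earlier
    return frame / max(frame.max(), 1e-9)          # newer activity reads brighter

def lif_frame(events, shape, tau=0.05, threshold=1.0):
    """LIF encoding: leaky accumulation per pixel, spike count on threshold."""
    potential = np.zeros(shape, dtype=np.float32)
    last_t = np.zeros(shape, dtype=np.float32)
    spikes = np.zeros(shape, dtype=np.float32)
    for x, y, t, p in events:
        decay = np.exp(-(t - last_t[y, x]) / tau)  # leak since the last event
        potential[y, x] = potential[y, x] * decay + 1.0
        last_t[y, x] = t
        if potential[y, x] >= threshold:           # fire and reset
            spikes[y, x] += 1.0
            potential[y, x] = 0.0
    return spikes / max(spikes.max(), 1.0)
```

Stacking the three single-channel frames, e.g. `np.stack([freq, sae, lif])` for a hypothetical 260x346 event camera, would give the kind of multi-channel input that channel-level fusion feeds to a single detector, whereas decision-level fusion keeps one detector per encoding.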
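Decision-level fusion with probability assignments can likewise be sketched: run one detector per encoding, cluster overlapping boxes across the three outputs, and combine their confidence scores. Below is a hedged sketch assuming each detector returns (x1, y1, x2, y2, score) boxes; the IoU threshold and the score averaging are illustrative choices, and the paper's exact fusion rule may differ.

```python
# Hedged sketch of decision-level fusion: detections from the three
# single-cue detectors are clustered by IoU overlap, and the clustered
# probability assignments (confidence scores) are averaged. Box format and
# thresholds are illustrative assumptions, not the paper's exact rule.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, ...) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(per_cue_dets, iou_thr=0.5, score_thr=0.5):
    """per_cue_dets: one list per encoding of (x1, y1, x2, y2, score) boxes."""
    clusters = []                                  # overlapping boxes across cues
    for dets in per_cue_dets:
        for det in dets:
            for cluster in clusters:
                if iou(det, cluster[0]) >= iou_thr:
                    cluster.append(det)
                    break
            else:                                  # no overlap: start a new cluster
                clusters.append([det])
    fused = []
    for cluster in clusters:
        # average coordinates and probability assignments over the cluster
        avg = tuple(sum(b[i] for b in cluster) / len(cluster) for i in range(5))
        if avg[4] >= score_thr:
            fused.append(avg)
    return fused
```

For example, `fuse_detections([dets_freq, dets_sae, dets_lif])` returns one fused box list; a box supported by only one weak cue is suppressed by the averaged score, which is one intuition for why decision-level fusion can outperform any single encoding.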
Authors: Guang Chen; Hu Cao; Canbo Ye; Zhenyan Zhang; Xingbo Liu; Xuhui Mo; Zhongnan Qu; Jörg Conradt; Florian Röhrbein; Alois Knoll
Format: E-Article
Language: English
Published: 2019
Subjects: neuromorphic vision sensor; event-stream encoding; object detection; convolutional neural network; multi-cue event information fusion
In: Frontiers in Neurorobotics, Frontiers Media S.A., 2008, 13(2019) (volume: 13; year: 2019)
Links: https://doi.org/10.3389/fnbot.2019.00010 ; https://doaj.org/article/3505b20327c14b64bc5426d94f743807 ; https://www.frontiersin.org/article/10.3389/fnbot.2019.00010/full (all open access)
DOI / URN: 10.3389/fnbot.2019.00010
Catalog ID: DOAJ01319125X
LEADER 01000caa a22002652 4500
001    DOAJ01319125X
003    DE-627
005    20230310053734.0
007    cr uuu---uuuuu
008    230226s2019 xx |||||o 00| ||eng c
024 7_ |a 10.3389/fnbot.2019.00010 |2 doi
035 __ |a (DE-627)DOAJ01319125X
035 __ |a (DE-599)DOAJ3505b20327c14b64bc5426d94f743807
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
050 _0 |a RC321-571
100 0_ |a Guang Chen |e verfasserin |4 aut
245 10 |a Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors
264 _1 |c 2019
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
520 __ |a Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works have addressed object detection with this sensor. In this work, we develop pedestrian detectors that unlock the potential of event data by leveraging multi-cue information and different fusion strategies. To make the best of the event data, we introduce three event-stream encoding methods based on Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion of the probability assignments. We explain, qualitatively and quantitatively, why the different encoding methods were chosen for evaluating pedestrian detection and which performs best. We demonstrate the advantages of decision-level fusion by leveraging multi-cue event information and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for more fascinating perception applications with neuromorphic vision sensors.
650 _4 |a neuromorphic vision sensor
650 _4 |a event-stream encoding
650 _4 |a object detection
650 _4 |a convolutional neural network
650 _4 |a multi-Cue event information fusion
653 _0 |a Neurosciences. Biological psychiatry. Neuropsychiatry
700 0_ |a Guang Chen |e verfasserin |4 aut
700 0_ |a Hu Cao |e verfasserin |4 aut
700 0_ |a Canbo Ye |e verfasserin |4 aut
700 0_ |a Zhenyan Zhang |e verfasserin |4 aut
700 0_ |a Xingbo Liu |e verfasserin |4 aut
700 0_ |a Xuhui Mo |e verfasserin |4 aut
700 0_ |a Zhongnan Qu |e verfasserin |4 aut
700 0_ |a Jörg Conradt |e verfasserin |4 aut
700 0_ |a Florian Röhrbein |e verfasserin |4 aut
700 0_ |a Alois Knoll |e verfasserin |4 aut
773 08 |i In |t Frontiers in Neurorobotics |d Frontiers Media S.A., 2008 |g 13(2019) |w (DE-627)579826716 |w (DE-600)2453002-5 |x 16625218 |7 nnns
773 18 |g volume:13 |g year:2019
856 40 |u https://doi.org/10.3389/fnbot.2019.00010 |z kostenfrei
856 40 |u https://doaj.org/article/3505b20327c14b64bc5426d94f743807 |z kostenfrei
856 40 |u https://www.frontiersin.org/article/10.3389/fnbot.2019.00010/full |z kostenfrei
856 42 |u https://doaj.org/toc/1662-5218 |y Journal toc |z kostenfrei
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_DOAJ
912 __ |a GBV_ILN_11
912 __ |a GBV_ILN_20
912 __ |a GBV_ILN_22
912 __ |a GBV_ILN_23
912 __ |a GBV_ILN_24
912 __ |a GBV_ILN_39
912 __ |a GBV_ILN_40
912 __ |a GBV_ILN_60
912 __ |a GBV_ILN_62
912 __ |a GBV_ILN_63
912 __ |a GBV_ILN_65
912 __ |a GBV_ILN_69
912 __ |a GBV_ILN_70
912 __ |a GBV_ILN_73
912 __ |a GBV_ILN_74
912 __ |a GBV_ILN_95
912 __ |a GBV_ILN_105
912 __ |a GBV_ILN_110
912 __ |a GBV_ILN_151
912 __ |a GBV_ILN_161
912 __ |a GBV_ILN_170
912 __ |a GBV_ILN_206
912 __ |a GBV_ILN_213
912 __ |a GBV_ILN_230
912 __ |a GBV_ILN_285
912 __ |a GBV_ILN_293
912 __ |a GBV_ILN_370
912 __ |a GBV_ILN_602
912 __ |a GBV_ILN_2003
912 __ |a GBV_ILN_2014
912 __ |a GBV_ILN_2055
912 __ |a GBV_ILN_4012
912 __ |a GBV_ILN_4037
912 __ |a GBV_ILN_4112
912 __ |a GBV_ILN_4125
912 __ |a GBV_ILN_4126
912 __ |a GBV_ILN_4249
912 __ |a GBV_ILN_4305
912 __ |a GBV_ILN_4306
912 __ |a GBV_ILN_4307
912 __ |a GBV_ILN_4313
912 __ |a GBV_ILN_4322
912 __ |a GBV_ILN_4323
912 __ |a GBV_ILN_4324
912 __ |a GBV_ILN_4325
912 __ |a GBV_ILN_4335
912 __ |a GBV_ILN_4338
912 __ |a GBV_ILN_4367
912 __ |a GBV_ILN_4700
951 __ |a AR
952 __ |d 13 |j 2019