Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion
Human falls, especially among elderly people, can cause serious injuries that may lead to permanent disability. Approximately 20–30% of older adults in the United States who experience falls suffer head trauma, injuries, or bruises. Fall detection is therefore becoming an important public healthcare problem. Timely and accurate detection of fall incidents could enable the immediate delivery of medical services to the injured. Advances in vision-based technologies, including deep learning, have shown significant results in action recognition, with some work focusing on the detection of fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach applied to every 16 frames of an input video to highlight movement differences within that range. The resulting four consecutive preprocessed images are fed to a newly proposed, efficient, lightweight multi-stream CNN model based on a four-branch architecture (4S-3DCNN), which classifies whether a human fall has occurred. The evaluation used more than 6392 sequences generated from the Le2i fall detection dataset, a publicly available fall video dataset. The proposed method, using three-fold cross-validation to assess generalization and susceptibility to overfitting, achieved an accuracy, sensitivity, specificity, and precision of 99.03%, 99.00%, 99.68%, and 99.00%, respectively. The experimental results show that the proposed model outperforms state-of-the-art models, including GoogLeNet, SqueezeNet, ResNet18, and DarkNet19, for fall incident detection.
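The abstract describes fusing every 16-frame window of the input video into four preprocessed images that highlight movement, one per CNN stream. The record does not specify the exact fusion rule, so the sketch below is only an illustrative guess: it collapses each group of four consecutive frames into one motion image by accumulating absolute frame differences. The function name `fuse_window` and the grouping scheme are assumptions, not the authors' method.

```python
import numpy as np

def fuse_window(frames: np.ndarray) -> np.ndarray:
    """Illustrative multi-level fusion of one 16-frame window.

    `frames` has shape (16, H, W). Each group of 4 consecutive frames is
    collapsed into one image by summing absolute differences between
    consecutive frames, which highlights movement within the group.
    Returns 4 fused images of shape (4, H, W) -- one per CNN stream.
    The grouping and the difference operator are assumptions; the paper's
    actual fusion rule may differ.
    """
    assert frames.shape[0] == 16, "expected a 16-frame window"
    fused = []
    for g in range(4):
        group = frames[g * 4:(g + 1) * 4].astype(np.float32)
        # np.diff along the time axis gives 3 frame-to-frame differences;
        # summing their magnitudes yields one motion-highlight image.
        motion = np.abs(np.diff(group, axis=0)).sum(axis=0)
        fused.append(motion)
    return np.stack(fused)

# Example: a synthetic 16-frame clip of 32x32 grayscale frames.
clip = np.random.rand(16, 32, 32)
streams = fuse_window(clip)
print(streams.shape)  # (4, 32, 32)
```

A static clip (identical frames) produces all-zero fused images, which is the point of the construction: only movement survives the differencing.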
Detailed description

Author(s): Thamer Alanazi [author]; Ghulam Muhammad [author]
Format: Electronic article
Language: English
Published: 2022
In: Diagnostics - MDPI AG, 2012, 12(2022), 12, p 3060
In: volume:12; year:2022; number:12, p 3060
DOI / URN: 10.3390/diagnostics12123060
Catalog ID: DOAJ08319438X
LEADER 01000caa a22002652 4500
001 DOAJ08319438X
003 DE-627
005 20240414153716.0
007 cr uuu---uuuuu
008 230311s2022 xx |||||o 00| ||eng c
024 7_ |a 10.3390/diagnostics12123060 |2 doi
035 __ |a (DE-627)DOAJ08319438X
035 __ |a (DE-599)DOAJ754b72815e824e739713dfbe9b32d1fe
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
050 _0 |a R5-920
100 0_ |a Thamer Alanazi |e verfasserin |4 aut
245 10 |a Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion
264 _1 |c 2022
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
520 __ |a Human falls, especially among elderly people, can cause serious injuries that may lead to permanent disability. Approximately 20–30% of older adults in the United States who experience falls suffer head trauma, injuries, or bruises. Fall detection is therefore becoming an important public healthcare problem. Timely and accurate detection of fall incidents could enable the immediate delivery of medical services to the injured. Advances in vision-based technologies, including deep learning, have shown significant results in action recognition, with some work focusing on the detection of fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach applied to every 16 frames of an input video to highlight movement differences within that range. The resulting four consecutive preprocessed images are fed to a newly proposed, efficient, lightweight multi-stream CNN model based on a four-branch architecture (4S-3DCNN), which classifies whether a human fall has occurred. The evaluation used more than 6392 sequences generated from the Le2i fall detection dataset, a publicly available fall video dataset. The proposed method, using three-fold cross-validation to assess generalization and susceptibility to overfitting, achieved an accuracy, sensitivity, specificity, and precision of 99.03%, 99.00%, 99.68%, and 99.00%, respectively. The experimental results show that the proposed model outperforms state-of-the-art models, including GoogLeNet, SqueezeNet, ResNet18, and DarkNet19, for fall incident detection.
650 _4 |a human fall detection
650 _4 |a deep learning
650 _4 |a convolution neural networks
650 _4 |a fusion networks
650 _4 |a 3D-CNN and 2D-CNN
653 _0 |a Medicine (General)
700 0_ |a Ghulam Muhammad |e verfasserin |4 aut
773 08 |i In |t Diagnostics |d MDPI AG, 2012 |g 12(2022), 12, p 3060 |w (DE-627)718627814 |w (DE-600)2662336-5 |x 20754418 |7 nnns
773 18 |g volume:12 |g year:2022 |g number:12, p 3060
856 40 |u https://doi.org/10.3390/diagnostics12123060 |z kostenfrei
856 40 |u https://doaj.org/article/754b72815e824e739713dfbe9b32d1fe |z kostenfrei
856 40 |u https://www.mdpi.com/2075-4418/12/12/3060 |z kostenfrei
856 42 |u https://doaj.org/toc/2075-4418 |y Journal toc |z kostenfrei
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_DOAJ
912 __ |a GBV_ILN_20
912 __ |a GBV_ILN_22
912 __ |a GBV_ILN_23
912 __ |a GBV_ILN_24
912 __ |a GBV_ILN_39
912 __ |a GBV_ILN_40
912 __ |a GBV_ILN_60
912 __ |a GBV_ILN_62
912 __ |a GBV_ILN_63
912 __ |a GBV_ILN_65
912 __ |a GBV_ILN_69
912 __ |a GBV_ILN_73
912 __ |a GBV_ILN_74
912 __ |a GBV_ILN_95
912 __ |a GBV_ILN_105
912 __ |a GBV_ILN_110
912 __ |a GBV_ILN_151
912 __ |a GBV_ILN_161
912 __ |a GBV_ILN_170
912 __ |a GBV_ILN_206
912 __ |a GBV_ILN_213
912 __ |a GBV_ILN_230
912 __ |a GBV_ILN_285
912 __ |a GBV_ILN_293
912 __ |a GBV_ILN_602
912 __ |a GBV_ILN_2005
912 __ |a GBV_ILN_2009
912 __ |a GBV_ILN_2011
912 __ |a GBV_ILN_2014
912 __ |a GBV_ILN_2055
912 __ |a GBV_ILN_2111
912 __ |a GBV_ILN_4012
912 __ |a GBV_ILN_4037
912 __ |a GBV_ILN_4112
912 __ |a GBV_ILN_4125
912 __ |a GBV_ILN_4126
912 __ |a GBV_ILN_4249
912 __ |a GBV_ILN_4305
912 __ |a GBV_ILN_4306
912 __ |a GBV_ILN_4307
912 __ |a GBV_ILN_4313
912 __ |a GBV_ILN_4322
912 __ |a GBV_ILN_4323
912 __ |a GBV_ILN_4324
912 __ |a GBV_ILN_4325
912 __ |a GBV_ILN_4338
912 __ |a GBV_ILN_4367
912 __ |a GBV_ILN_4700
951 __ |a AR
952 __ |d 12 |j 2022 |e 12, p 3060
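The 520 abstract reports accuracy, sensitivity, specificity, and precision for the fall/no-fall classifier. For reference, these four metrics follow directly from binary confusion-matrix counts; the counts in the example below are illustrative only, not taken from the paper.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Standard binary-classification metrics from confusion-matrix counts.

    tp/fn count actual falls (detected/missed); tn/fp count non-falls
    (correctly rejected / falsely flagged).
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall: fraction of real falls detected
    specificity = tn / (tn + fp)   # fraction of non-falls correctly rejected
    precision = tp / (tp + fp)     # fraction of alarms that were real falls
    return accuracy, sensitivity, specificity, precision

# Illustrative counts (not from the paper).
acc, sens, spec, prec = classification_metrics(tp=99, fp=1, tn=99, fn=1)
print(acc, sens, spec, prec)  # 0.99 0.99 0.99 0.99
```

Note that the abstract's sensitivity and precision coincide at 99.00%, which is consistent with a near-balanced confusion matrix of this shape.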
author_variant: t a ta g m gm
matchkey_str: article:20754418:2022----::uafldtcinsn3mlitemovltoanua
hierarchy_sort_str: 2022
callnumber-subject-code: R
publishDate: 2022
language: English
source: In Diagnostics 12(2022), 12, p 3060 volume:12 year:2022 number:12, p 3060
sourceStr: In Diagnostics 12(2022), 12, p 3060 volume:12 year:2022 number:12, p 3060
format_phy_str_mv: Article
institution: findex.gbv.de
topic_facet: human fall detection deep learning convolution neural networks fusion networks 3D-CNN and 2D-CNN Medicine (General)
isfreeaccess_bool: true
container_title: Diagnostics
authorswithroles_txt_mv: Thamer Alanazi @@aut@@ Ghulam Muhammad @@aut@@
publishDateDaySort_date: 2022-01-01T00:00:00Z
hierarchy_top_id: 718627814
id: DOAJ08319438X
language_de: englisch
callnumber-first |
R - Medicine |
author |
Thamer Alanazi |
spellingShingle |
Thamer Alanazi misc R5-920 misc human fall detection misc deep learning misc convolution neural networks misc fusion networks misc 3D-CNN and 2D-CNN misc Medicine (General) Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion |
authorStr |
Thamer Alanazi |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)718627814 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
R5-920 |
illustrated |
Not Illustrated |
issn |
20754418 |
topic_title |
R5-920 Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion human fall detection deep learning convolution neural networks fusion networks 3D-CNN and 2D-CNN |
topic |
misc R5-920 misc human fall detection misc deep learning misc convolution neural networks misc fusion networks misc 3D-CNN and 2D-CNN misc Medicine (General) |
topic_unstemmed |
misc R5-920 misc human fall detection misc deep learning misc convolution neural networks misc fusion networks misc 3D-CNN and 2D-CNN misc Medicine (General) |
topic_browse |
misc R5-920 misc human fall detection misc deep learning misc convolution neural networks misc fusion networks misc 3D-CNN and 2D-CNN misc Medicine (General) |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Diagnostics |
hierarchy_parent_id |
718627814 |
hierarchy_top_title |
Diagnostics |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)718627814 (DE-600)2662336-5 |
title |
Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion |
ctrlnum |
(DE-627)DOAJ08319438X (DE-599)DOAJ754b72815e824e739713dfbe9b32d1fe |
title_full |
Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion |
author_sort |
Thamer Alanazi |
journal |
Diagnostics |
journalStr |
Diagnostics |
callnumber-first-code |
R |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
author_browse |
Thamer Alanazi Ghulam Muhammad |
container_volume |
12 |
class |
R5-920 |
format_se |
Elektronische Aufsätze |
author-letter |
Thamer Alanazi |
doi_str_mv |
10.3390/diagnostics12123060 |
author2-role |
verfasserin |
title_sort |
human fall detection using 3d multi-stream convolutional neural networks with fusion |
callnumber |
R5-920 |
title_auth |
Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion |
abstract |
Human falls, especially for elderly people, can cause serious injuries that might lead to permanent disability. Approximately 20–30% of aged people in the United States who experience fall accidents suffer from head trauma, injuries, or bruises. Fall detection is becoming an important public healthcare problem. Timely and accurate fall incident detection could enable the instant delivery of medical services to the injured. New advances in vision-based technologies, including deep learning, have shown significant results in action recognition, where some focus on the detection of fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach of every 16 frames of an input video to highlight movement differences within this range. The resulting four consecutive preprocessed images are fed to a newly proposed, efficient, lightweight multi-stream CNN model based on a four-branch architecture (4S-3DCNN) that classifies whether there is an incident of a human fall. The evaluation included the use of more than 6392 generated sequences from the Le2i fall detection dataset, which is a publicly available fall video dataset. The proposed method, using three-fold cross-validation to assess generalization and susceptibility to overfitting, achieved a 99.03%, 99.00%, 99.68%, and 99.00% accuracy, sensitivity, specificity, and precision, respectively. The experimental results prove that the proposed model outperforms state-of-the-art models, including GoogleNet, SqueezeNet, ResNet18, and DarkNet19, for fall incident detection.
abstractGer |
Human falls, especially for elderly people, can cause serious injuries that might lead to permanent disability. Approximately 20–30% of aged people in the United States who experience fall accidents suffer from head trauma, injuries, or bruises. Fall detection is becoming an important public healthcare problem. Timely and accurate fall incident detection could enable the instant delivery of medical services to the injured. New advances in vision-based technologies, including deep learning, have shown significant results in action recognition, where some focus on the detection of fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach of every 16 frames of an input video to highlight movement differences within this range. The resulting four consecutive preprocessed images are fed to a newly proposed, efficient, lightweight multi-stream CNN model based on a four-branch architecture (4S-3DCNN) that classifies whether there is an incident of a human fall. The evaluation included the use of more than 6392 generated sequences from the Le2i fall detection dataset, which is a publicly available fall video dataset. The proposed method, using three-fold cross-validation to assess generalization and susceptibility to overfitting, achieved a 99.03%, 99.00%, 99.68%, and 99.00% accuracy, sensitivity, specificity, and precision, respectively. The experimental results prove that the proposed model outperforms state-of-the-art models, including GoogleNet, SqueezeNet, ResNet18, and DarkNet19, for fall incident detection.
abstract_unstemmed |
Human falls, especially for elderly people, can cause serious injuries that might lead to permanent disability. Approximately 20–30% of aged people in the United States who experience fall accidents suffer from head trauma, injuries, or bruises. Fall detection is becoming an important public healthcare problem. Timely and accurate fall incident detection could enable the instant delivery of medical services to the injured. New advances in vision-based technologies, including deep learning, have shown significant results in action recognition, where some focus on the detection of fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach of every 16 frames of an input video to highlight movement differences within this range. The resulting four consecutive preprocessed images are fed to a newly proposed, efficient, lightweight multi-stream CNN model based on a four-branch architecture (4S-3DCNN) that classifies whether there is an incident of a human fall. The evaluation included the use of more than 6392 generated sequences from the Le2i fall detection dataset, which is a publicly available fall video dataset. The proposed method, using three-fold cross-validation to assess generalization and susceptibility to overfitting, achieved a 99.03%, 99.00%, 99.68%, and 99.00% accuracy, sensitivity, specificity, and precision, respectively. The experimental results prove that the proposed model outperforms state-of-the-art models, including GoogleNet, SqueezeNet, ResNet18, and DarkNet19, for fall incident detection.
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_73 GBV_ILN_74 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_206 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_602 GBV_ILN_2005 GBV_ILN_2009 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2055 GBV_ILN_2111 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
container_issue |
12, p 3060 |
title_short |
Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion |
url |
https://doi.org/10.3390/diagnostics12123060 https://doaj.org/article/754b72815e824e739713dfbe9b32d1fe https://www.mdpi.com/2075-4418/12/12/3060 https://doaj.org/toc/2075-4418 |
remote_bool |
true |
author2 |
Ghulam Muhammad |
author2Str |
Ghulam Muhammad |
ppnlink |
718627814 |
callnumber-subject |
R - General Medicine |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.3390/diagnostics12123060 |
callnumber-a |
R5-920 |
up_date |
2024-07-03T16:05:07.723Z |
_version_ |
1803574530777546753 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ08319438X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240414153716.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230311s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.3390/diagnostics12123060</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ08319438X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ754b72815e824e739713dfbe9b32d1fe</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">R5-920</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Thamer Alanazi</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Human Fall Detection Using 3D Multi-Stream Convolutional Neural Networks with Fusion</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield 
code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Human falls, especially for elderly people, can cause serious injuries that might lead to permanent disability. Approximately 20–30% of aged people in the United States who experience fall accidents suffer from head trauma, injuries, or bruises. Fall detection is becoming an important public healthcare problem. Timely and accurate fall incident detection could enable the instant delivery of medical services to the injured. New advances in vision-based technologies, including deep learning, have shown significant results in action recognition, where some focus on the detection of fall actions. In this paper, we propose an automatic human fall detection system using multi-stream convolutional neural networks with fusion. The system is based on a multi-level image-fusion approach of every 16 frames of an input video to highlight movement differences within this range. The resulting four consecutive preprocessed images are fed to a newly proposed, efficient, lightweight multi-stream CNN model based on a four-branch architecture (4S-3DCNN) that classifies whether there is an incident of a human fall. The evaluation included the use of more than 6392 generated sequences from the Le2i fall detection dataset, which is a publicly available fall video dataset. The proposed method, using three-fold cross-validation to assess generalization and susceptibility to overfitting, achieved a 99.03%, 99.00%, 99.68%, and 99.00% accuracy, sensitivity, specificity, and precision, respectively. 
The experimental results prove that the proposed model outperforms state-of-the-art models, including GoogleNet, SqueezeNet, ResNet18, and DarkNet19, for fall incident detection.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">human fall detection</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep learning</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">convolution neural networks</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">fusion networks</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">3D-CNN and 2D-CNN</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Medicine (General)</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Ghulam Muhammad</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">Diagnostics</subfield><subfield code="d">MDPI AG, 2012</subfield><subfield code="g">12(2022), 12, p 3060</subfield><subfield code="w">(DE-627)718627814</subfield><subfield code="w">(DE-600)2662336-5</subfield><subfield code="x">20754418</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:12</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:12, p 3060</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.3390/diagnostics12123060</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/754b72815e824e739713dfbe9b32d1fe</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://www.mdpi.com/2075-4418/12/12/3060</subfield><subfield 
code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2075-4418</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " 
ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_206</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">12</subfield><subfield code="j">2022</subfield><subfield code="e">12, p 3060</subfield></datafield></record></collection>
|