Making Anomalies More Anomalous: Video Anomaly Detection Using a Novel Generator and Destroyer
We propose a novel approach for video anomaly detection. Existing video anomaly detection methods train only on normal frames, with the expectation that the quality of the abnormal frames will decrease, and utilize the reconstruction error with the ground truth to detect anomalies. However, a challenge exists owing to the powerful generalization capability of deep neural networks, as they tend to proficiently generate abnormal frames. To address this issue, we introduce a novel method to make anomalies more anomalous by destroying abnormal areas in abnormal frames. Accordingly, we propose the frame-to-label and motion (F2LM) generator and Destroyer. The F2LM generator predicts a future frame by utilizing the label and motion information of the input frames, thereby degrading the quality of abnormal regions. The Destroyer destroys abnormal regions by transforming low-quality areas into zero vectors. Both models were trained individually, and during testing, the F2LM generator degraded the quality of abnormal regions, and the Destroyer subsequently destroyed these areas. Our proposed video anomaly detection method demonstrated superior performance compared to state-of-the-art models with three benchmark datasets (UCSD Ped2, CUHK Avenue, Shanghai Tech.). Our code and models are available online at https://github.com/SkiddieAhn/Paper-Making-Anomalies-More-Anomalous.
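The abstract describes a two-stage pipeline: a generator predicts a future frame (abnormal regions come out low quality), a Destroyer replaces those low-quality regions with zero vectors, and the anomaly score is derived from the resulting error against the ground-truth frame. The sketch below is not the authors' implementation (see their repository for that); it is a minimal Python/PyTorch illustration of the scoring idea, with a simple per-patch quality heuristic standing in for the learned Destroyer. The function names, patch size, and threshold are illustrative assumptions.

```python
# Minimal sketch of the "destroy low-quality regions, then score" idea from the
# abstract. The learned F2LM generator and Destroyer are replaced here by a
# heuristic per-patch quality check; all names and constants are assumptions.
import torch
import torch.nn.functional as F

def destroy_low_quality_regions(pred, target, patch=16, quality_threshold=0.7):
    """Zero out patches of the predicted frame whose local quality proxy is low."""
    b, c, h, w = pred.shape
    # Per-patch mean squared error between prediction and ground truth.
    err = F.avg_pool2d((pred - target) ** 2, kernel_size=patch, stride=patch).mean(1, keepdim=True)
    quality = 1.0 / (1.0 + err)                      # crude per-patch quality proxy
    keep = (quality >= quality_threshold).float()    # 1 = keep patch, 0 = destroy it
    mask = F.interpolate(keep, size=(h, w), mode="nearest")
    return pred * mask                               # destroyed regions become zero vectors

def anomaly_score(pred, target):
    """Higher score = more anomalous; error grows once low-quality regions are zeroed."""
    destroyed = destroy_low_quality_regions(pred, target)
    return F.mse_loss(destroyed, target).item()

if __name__ == "__main__":
    # Random tensors stand in for a predicted frame and its ground truth.
    pred = torch.rand(1, 3, 256, 256)
    target = torch.rand(1, 3, 256, 256)
    print(f"anomaly score: {anomaly_score(pred, target):.4f}")
```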
Detailed description

Author(s): Seungkyun Hong [author]; Sunghyun Ahn [author]; Youngwan Jo [author]; Sanghyun Park [author]
Format: E-Article
Language: English
Published: 2024
Keywords: Deep learning; future frame prediction; video anomaly detection; video surveillance
Parent work: In: IEEE Access - IEEE, 2014, 12(2024), pages 36712-36726
Parent work: volume:12; year:2024; pages:36712-36726
Links: https://doi.org/10.1109/ACCESS.2024.3374383 (free access); https://doaj.org/article/f5689c842b1142ab9f1da4038b0218f4 (free access); https://ieeexplore.ieee.org/document/10462109/ (free access); https://doaj.org/toc/2169-3536 (journal table of contents, free access)
DOI / URN: 10.1109/ACCESS.2024.3374383
Catalog ID: DOAJ091381096
LEADER 01000caa a22002652 4500
001 DOAJ091381096
003 DE-627
005 20240414132415.0
007 cr uuu---uuuuu
008 240412s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/ACCESS.2024.3374383 |2 doi
035 |a (DE-627)DOAJ091381096
035 |a (DE-599)DOAJf5689c842b1142ab9f1da4038b0218f4
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
050 0 |a TK1-9971
100 0 |a Seungkyun Hong |e verfasserin |4 aut
245 1 0 |a Making Anomalies More Anomalous: Video Anomaly Detection Using a Novel Generator and Destroyer
264 1 |c 2024
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
520 |a We propose a novel approach for video anomaly detection. Existing video anomaly detection methods train only on normal frames, with the expectation that the quality of the abnormal frames will decrease, and utilize the reconstruction error with the ground truth to detect anomalies. However, a challenge exists owing to the powerful generalization capability of deep neural networks, as they tend to proficiently generate abnormal frames. To address this issue, we introduce a novel method to make anomalies more anomalous by destroying abnormal areas in abnormal frames. Accordingly, we propose the frame-to-label and motion (F2LM) generator and Destroyer. The F2LM generator predicts a future frame by utilizing the label and motion information of the input frames, thereby degrading the quality of abnormal regions. The Destroyer destroys abnormal regions by transforming low-quality areas into zero vectors. Both models were trained individually, and during testing, the F2LM generator degraded the quality of abnormal regions, and the Destroyer subsequently destroyed these areas. Our proposed video anomaly detection method demonstrated superior performance compared to state-of-the-art models with three benchmark datasets (UCSD Ped2, CUHK Avenue, Shanghai Tech.). Our code and models are available online at https://github.com/SkiddieAhn/Paper-Making-Anomalies-More-Anomalous.
650 4 |a Deep learning
650 4 |a future frame prediction
650 4 |a video anomaly detection
650 4 |a video surveillance
653 0 |a Electrical engineering. Electronics. Nuclear engineering
700 0 |a Sunghyun Ahn |e verfasserin |4 aut
700 0 |a Youngwan Jo |e verfasserin |4 aut
700 0 |a Sanghyun Park |e verfasserin |4 aut
773 0 8 |i In |t IEEE Access |d IEEE, 2014 |g 12(2024), Seite 36712-36726 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns
773 1 8 |g volume:12 |g year:2024 |g pages:36712-36726
856 4 0 |u https://doi.org/10.1109/ACCESS.2024.3374383 |z kostenfrei
856 4 0 |u https://doaj.org/article/f5689c842b1142ab9f1da4038b0218f4 |z kostenfrei
856 4 0 |u https://ieeexplore.ieee.org/document/10462109/ |z kostenfrei
856 4 2 |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_DOAJ
912 |a GBV_ILN_11
912 |a GBV_ILN_20
912 |a GBV_ILN_22
912 |a GBV_ILN_23
912 |a GBV_ILN_24
912 |a GBV_ILN_31
912 |a GBV_ILN_39
912 |a GBV_ILN_40
912 |a GBV_ILN_60
912 |a GBV_ILN_62
912 |a GBV_ILN_63
912 |a GBV_ILN_65
912 |a GBV_ILN_69
912 |a GBV_ILN_70
912 |a GBV_ILN_73
912 |a GBV_ILN_95
912 |a GBV_ILN_105
912 |a GBV_ILN_110
912 |a GBV_ILN_151
912 |a GBV_ILN_161
912 |a GBV_ILN_170
912 |a GBV_ILN_213
912 |a GBV_ILN_230
912 |a GBV_ILN_285
912 |a GBV_ILN_293
912 |a GBV_ILN_370
912 |a GBV_ILN_602
912 |a GBV_ILN_2014
912 |a GBV_ILN_4012
912 |a GBV_ILN_4037
912 |a GBV_ILN_4112
912 |a GBV_ILN_4125
912 |a GBV_ILN_4126
912 |a GBV_ILN_4249
912 |a GBV_ILN_4305
912 |a GBV_ILN_4306
912 |a GBV_ILN_4307
912 |a GBV_ILN_4313
912 |a GBV_ILN_4322
912 |a GBV_ILN_4323
912 |a GBV_ILN_4324
912 |a GBV_ILN_4325
912 |a GBV_ILN_4335
912 |a GBV_ILN_4338
912 |a GBV_ILN_4367
912 |a GBV_ILN_4700
951 |a AR
952 |d 12 |j 2024 |h 36712-36726