GsNeRF: Fast novel view synthesis of dynamic radiance fields
Synthesizing novel views of dynamic 3D scenes is a challenging task in 3D vision. Most current approaches rely on radiance fields built on multi-view camera systems, which are expensive and time-consuming, and depend on implicit representations based on Neural Radiance Fields (NeRF). In this study, we introduce GsNeRF, a new representation for dynamic scenes that quickly reconstructs moving objects from a single camera moving through the scene and renders high-quality views for arbitrary time frames and camera poses. We represent the dynamic scene with five grids, applying tensor decomposition to each grid to reduce storage. Since the model's primary task is to optimize these planes, we enforce their spatio-temporal continuity with a smoothness loss. GsNeRF is combined with a miniature MLP that regresses color and is trained with volume rendering. Across a range of synthetic and real datasets, our method trains over 100 times faster than implicit methods and renders at higher quality than explicit methods, striking an overall balance between memory usage, speed, and quality.
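The abstract describes a plane-factorized dynamic radiance field: feature grids compressed by tensor decomposition, a smoothness loss over the planes, and a miniature MLP trained via volume rendering. The sketch below is an illustration of that general recipe, not the authors' implementation; the specific plane pairings, resolutions, activations, and loss form are assumptions.

```python
# Hedged sketch of a plane-factorized dynamic radiance field (NOT the
# authors' code): five feature planes over (x, y, z, t), fused by
# element-wise product (a tensor-decomposition-style factorization),
# a tiny MLP regressing density + color, standard volume rendering,
# and a total-variation smoothness loss on the planes.
import numpy as np

rng = np.random.default_rng(0)
R, F = 32, 8                      # plane resolution and feature channels (assumed)

# Five planes: three spatial, two spatio-temporal (an assumed pairing;
# the paper states five grids without naming them in the abstract).
planes = {name: rng.normal(0.0, 0.1, (R, R, F))
          for name in ["xy", "xz", "yz", "xt", "yt"]}

def bilerp(plane, u, v):
    """Bilinearly interpolate an (R, R, F) plane at coords u, v in [0, 1]."""
    x, y = u * (R - 1), v * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[x0, y0] + wx * (1 - wy) * plane[x1, y0]
            + (1 - wx) * wy * plane[x0, y1] + wx * wy * plane[x1, y1])

def features(x, y, z, t):
    """Fuse per-plane features by Hadamard product (factorized 4D field)."""
    coords = {"xy": (x, y), "xz": (x, z), "yz": (y, z), "xt": (x, t), "yt": (y, t)}
    f = np.ones(F)
    for name, (u, v) in coords.items():
        f = f * bilerp(planes[name], u, v)
    return f

# Miniature MLP regressing density + RGB from the fused feature vector.
W1, b1 = rng.normal(0.0, 0.5, (F, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 0.5, (16, 4)), np.zeros(4)

def field(x, y, z, t):
    h = np.maximum(features(x, y, z, t) @ W1 + b1, 0.0)   # ReLU
    out = h @ W2 + b2
    sigma = np.log1p(np.exp(out[0]))                      # softplus density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))                  # sigmoid color in (0, 1)
    return sigma, rgb

def render_ray(origin, direction, t, n_samples=32, far=1.0):
    """Volume rendering: alpha-composite field samples along the ray."""
    delta = far / n_samples
    color, trans = np.zeros(3), 1.0
    for s in np.linspace(0.0, far, n_samples):
        p = np.clip(origin + s * direction, 0.0, 1.0)
        sigma, rgb = field(p[0], p[1], p[2], t)
        alpha = 1.0 - np.exp(-sigma * delta)
        color += trans * alpha * rgb
        trans *= 1.0 - alpha
    return color

def smoothness_loss(plane):
    """Total-variation penalty encouraging spatio-temporal continuity."""
    dx = plane[1:, :, :] - plane[:-1, :, :]
    dy = plane[:, 1:, :] - plane[:, :-1, :]
    return float((dx ** 2).mean() + (dy ** 2).mean())
```

In training, the rendered ray colors would be compared against pixels of the single moving camera's frames, with `smoothness_loss` summed over all five planes as a regularizer; only the planes and the tiny MLP are optimized, which is what makes the explicit representation fast.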
Detailed description

Author(s): Liu, Dezhi [author]; Wan, Weibing [author]; Fang, Zhijun [author]; Zheng, Xiuyuan [author]
Format: E-article
Language: English
Published: 2023
Subject headings: Neural rendering; Novel view synthesis; Accelerating render; Tensor decomposition
Parent work: Contained in: Computers & graphics - Amsterdam [u.a.] : Elsevier Science, 1975, 116, pages 491-499
Parent work: volume:116; pages:491-499
DOI / URN: 10.1016/j.cag.2023.10.002
Catalog ID: ELV066070643
LEADER 01000caa a22002652 4500
001    ELV066070643
003    DE-627
005    20240112093052.0
007    cr uuu---uuuuu
008    231209s2023 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.cag.2023.10.002 |2 doi
035    |a (DE-627)ELV066070643
035    |a (ELSEVIER)S0097-8493(23)00239-X
040    |a DE-627 |b ger |c DE-627 |e rda
041    |a eng
082 04 |a 004 |q VZ
084    |a 54.73 |2 bkl
100 1  |a Liu, Dezhi |e verfasserin |0 (orcid)0009-0002-3386-625X |4 aut
245 10 |a GsNeRF: Fast novel view synthesis of dynamic radiance fields
264  1 |c 2023
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
520    |a Synthesis of new views in dynamic 3D scenes is a challenging task in 3D vision. However, most current approaches rely on radiance fields built upon multi-view camera systems for dynamic scenes, which are expensive and time-consuming, and depend on implicit representations based on Neural Radiance Fields. In this study, we introduced a new representation method for dynamic scenes called GsNeRF, which allows for fast reconstruction of objects under motion by moving a single camera in the scene, and it can render high-quality views from arbitrary time frames and camera poses. We utilize five grids to represent the dynamic scene, employing tensor decomposition for each grid to reduce storage space usage. Since the primary task of the entire model is to optimize these planes, we enforce spatio-temporal continuity of these planes through a smoothness loss. GsNeRF is combined with a miniature MLP to regress color outputs and trained using volume rendering. Through testing on a series of synthetic and real datasets, our method reduces the training time by over 100 times compared to implicit methods and achieves better rendering quality compared to explicit methods. Our approach achieves a balance between memory usage, speed, and quality overall.
650  4 |a Neural rendering
650  4 |a Novel view synthesis
650  4 |a Accelerating render
650  4 |a Tensor decomposition
700 1  |a Wan, Weibing |e verfasserin |0 (orcid)0000-0002-7092-9849 |4 aut
700 1  |a Fang, Zhijun |e verfasserin |4 aut
700 1  |a Zheng, Xiuyuan |e verfasserin |4 aut
773 08 |i Enthalten in |t Computers & graphics |d Amsterdam [u.a.] : Elsevier Science, 1975 |g 116, Seite 491-499 |h Online-Ressource |w (DE-627)31622572X |w (DE-600)1499979-1 |w (DE-576)081984979 |7 nnns
773 18 |g volume:116 |g pages:491-499
912    |a GBV_USEFLAG_U
912    |a GBV_ELV
912    |a SYSFLAG_U
912    |a GBV_ILN_20
912    |a GBV_ILN_22
912    |a GBV_ILN_23
912    |a GBV_ILN_24
912    |a GBV_ILN_31
912    |a GBV_ILN_32
912    |a GBV_ILN_40
912    |a GBV_ILN_60
912    |a GBV_ILN_62
912    |a GBV_ILN_65
912    |a GBV_ILN_69
912    |a GBV_ILN_70
912    |a GBV_ILN_73
912    |a GBV_ILN_74
912    |a GBV_ILN_90
912    |a GBV_ILN_95
912    |a GBV_ILN_100
912    |a GBV_ILN_101
912    |a GBV_ILN_105
912    |a GBV_ILN_110
912    |a GBV_ILN_150
912    |a GBV_ILN_151
912    |a GBV_ILN_187
912    |a GBV_ILN_213
912    |a GBV_ILN_224
912    |a GBV_ILN_230
912    |a GBV_ILN_370
912    |a GBV_ILN_602
912    |a GBV_ILN_702
912    |a GBV_ILN_2001
912    |a GBV_ILN_2003
912    |a GBV_ILN_2004
912    |a GBV_ILN_2005
912    |a GBV_ILN_2007
912    |a GBV_ILN_2008
912    |a GBV_ILN_2009
912    |a GBV_ILN_2010
912    |a GBV_ILN_2011
912    |a GBV_ILN_2014
912    |a GBV_ILN_2015
912    |a GBV_ILN_2020
912    |a GBV_ILN_2021
912    |a GBV_ILN_2025
912    |a GBV_ILN_2026
912    |a GBV_ILN_2027
912    |a GBV_ILN_2034
912    |a GBV_ILN_2044
912    |a GBV_ILN_2048
912    |a GBV_ILN_2049
912    |a GBV_ILN_2050
912    |a GBV_ILN_2055
912    |a GBV_ILN_2056
912    |a GBV_ILN_2059
912    |a GBV_ILN_2061
912    |a GBV_ILN_2064
912    |a GBV_ILN_2088
912    |a GBV_ILN_2106
912    |a GBV_ILN_2110
912    |a GBV_ILN_2111
912    |a GBV_ILN_2112
912    |a GBV_ILN_2122
912    |a GBV_ILN_2129
912    |a GBV_ILN_2143
912    |a GBV_ILN_2152
912    |a GBV_ILN_2153
912    |a GBV_ILN_2190
912    |a GBV_ILN_2232
912    |a GBV_ILN_2336
912    |a GBV_ILN_2470
912    |a GBV_ILN_2507
912    |a GBV_ILN_4035
912    |a GBV_ILN_4037
912    |a GBV_ILN_4112
912    |a GBV_ILN_4125
912    |a GBV_ILN_4242
912    |a GBV_ILN_4249
912    |a GBV_ILN_4251
912    |a GBV_ILN_4305
912    |a GBV_ILN_4306
912    |a GBV_ILN_4307
912    |a GBV_ILN_4313
912    |a GBV_ILN_4322
912    |a GBV_ILN_4323
912    |a GBV_ILN_4324
912    |a GBV_ILN_4325
912    |a GBV_ILN_4326
912    |a GBV_ILN_4333
912    |a GBV_ILN_4334
912    |a GBV_ILN_4338
912    |a GBV_ILN_4393
912    |a GBV_ILN_4700
936 bk |a 54.73 |j Computergraphik |q VZ
951    |a AR
952    |d 116 |h 491-499
|
author |
Liu, Dezhi |
spellingShingle |
Liu, Dezhi ddc 004 bkl 54.73 misc Neural rendering misc Novel view synthesis misc Accelerating render misc Tensor decomposition GsNeRF: Fast novel view synthesis of dynamic radiance fields |
authorStr |
Liu, Dezhi |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)31622572X |
format |
electronic Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
004 VZ 54.73 bkl GsNeRF: Fast novel view synthesis of dynamic radiance fields Neural rendering Novel view synthesis Accelerating render Tensor decomposition |
topic |
ddc 004 bkl 54.73 misc Neural rendering misc Novel view synthesis misc Accelerating render misc Tensor decomposition |
topic_unstemmed |
ddc 004 bkl 54.73 misc Neural rendering misc Novel view synthesis misc Accelerating render misc Tensor decomposition |
topic_browse |
ddc 004 bkl 54.73 misc Neural rendering misc Novel view synthesis misc Accelerating render misc Tensor decomposition |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Computers & graphics |
hierarchy_parent_id |
31622572X |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Computers & graphics |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)31622572X (DE-600)1499979-1 (DE-576)081984979 |
title |
GsNeRF: Fast novel view synthesis of dynamic radiance fields |
ctrlnum |
(DE-627)ELV066070643 (ELSEVIER)S0097-8493(23)00239-X |
title_full |
GsNeRF: Fast novel view synthesis of dynamic radiance fields |
author_sort |
Liu, Dezhi |
journal |
Computers & graphics |
journalStr |
Computers & graphics |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
container_start_page |
491 |
author_browse |
Liu, Dezhi Wan, Weibing Fang, Zhijun Zheng, Xiuyuan |
container_volume |
116 |
class |
004 VZ 54.73 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Liu, Dezhi |
doi_str_mv |
10.1016/j.cag.2023.10.002 |
normlink |
(ORCID)0009-0002-3386-625X (ORCID)0000-0002-7092-9849 |
normlink_prefix_str_mv |
(orcid)0009-0002-3386-625X (orcid)0000-0002-7092-9849 |
dewey-full |
004 |
author2-role |
verfasserin |
title_sort |
gsnerf: fast novel view synthesis of dynamic radiance fields |
title_auth |
GsNeRF: Fast novel view synthesis of dynamic radiance fields |
abstract |
Synthesizing novel views of dynamic 3D scenes is a challenging task in 3D vision. Most current approaches rely on radiance fields built from expensive, time-consuming multi-view camera systems and depend on implicit representations based on Neural Radiance Fields (NeRF). In this study, we introduce GsNeRF, a new representation for dynamic scenes that reconstructs moving objects quickly from a single camera moving through the scene and renders high-quality views at arbitrary time frames and camera poses. We represent the dynamic scene with five grids, applying tensor decomposition to each grid to reduce storage. Since optimizing these planes is the model's primary task, we enforce their spatio-temporal continuity with a smoothness loss. GsNeRF is paired with a miniature MLP that regresses color and is trained via volume rendering. On a range of synthetic and real datasets, our method cuts training time by more than a factor of 100 relative to implicit methods and achieves better rendering quality than explicit methods, striking an overall balance among memory usage, speed, and quality.
abstractGer |
Synthesizing novel views of dynamic 3D scenes is a challenging task in 3D vision. Most current approaches rely on radiance fields built from expensive, time-consuming multi-view camera systems and depend on implicit representations based on Neural Radiance Fields (NeRF). In this study, we introduce GsNeRF, a new representation for dynamic scenes that reconstructs moving objects quickly from a single camera moving through the scene and renders high-quality views at arbitrary time frames and camera poses. We represent the dynamic scene with five grids, applying tensor decomposition to each grid to reduce storage. Since optimizing these planes is the model's primary task, we enforce their spatio-temporal continuity with a smoothness loss. GsNeRF is paired with a miniature MLP that regresses color and is trained via volume rendering. On a range of synthetic and real datasets, our method cuts training time by more than a factor of 100 relative to implicit methods and achieves better rendering quality than explicit methods, striking an overall balance among memory usage, speed, and quality.
abstract_unstemmed |
Synthesizing novel views of dynamic 3D scenes is a challenging task in 3D vision. Most current approaches rely on radiance fields built from expensive, time-consuming multi-view camera systems and depend on implicit representations based on Neural Radiance Fields (NeRF). In this study, we introduce GsNeRF, a new representation for dynamic scenes that reconstructs moving objects quickly from a single camera moving through the scene and renders high-quality views at arbitrary time frames and camera poses. We represent the dynamic scene with five grids, applying tensor decomposition to each grid to reduce storage. Since optimizing these planes is the model's primary task, we enforce their spatio-temporal continuity with a smoothness loss. GsNeRF is paired with a miniature MLP that regresses color and is trained via volume rendering. On a range of synthetic and real datasets, our method cuts training time by more than a factor of 100 relative to implicit methods and achieves better rendering quality than explicit methods, striking an overall balance among memory usage, speed, and quality.
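The smoothness loss the abstract describes (penalizing spatio-temporal discontinuities in the 2-D feature planes) can be illustrated with a minimal NumPy sketch. This is not the authors' code; the function names and the total-variation-style formulation are assumptions used only to show the idea.

```python
import numpy as np


def tv_smoothness_loss(plane: np.ndarray) -> float:
    """Mean squared difference between neighboring grid cells of one plane.

    A constant (perfectly smooth) plane incurs zero penalty; abrupt
    changes between adjacent cells are penalized quadratically.
    """
    dy = np.diff(plane, axis=0)  # differences between vertical neighbors
    dx = np.diff(plane, axis=1)  # differences between horizontal neighbors
    return float((dy ** 2).mean() + (dx ** 2).mean())


def total_smoothness(planes: list) -> float:
    """Sum the smoothness penalty over all feature planes of the scene."""
    return sum(tv_smoothness_loss(p) for p in planes)


# A flat plane is maximally smooth; a rapidly varying one is not.
flat = np.zeros((8, 8))
bumpy = (np.arange(64, dtype=float).reshape(8, 8)) ** 2
print(tv_smoothness_loss(flat))          # zero penalty
print(tv_smoothness_loss(bumpy) > 0.0)   # positive penalty
```

In a training loop, a weighted version of this term would be added to the volume-rendering reconstruction loss so that gradients also push neighboring plane entries toward similar values.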
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
title_short |
GsNeRF: Fast novel view synthesis of dynamic radiance fields |
remote_bool |
true |
author2 |
Wan, Weibing Fang, Zhijun Zheng, Xiuyuan |
author2Str |
Wan, Weibing Fang, Zhijun Zheng, Xiuyuan |
ppnlink |
31622572X |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.cag.2023.10.002 |
up_date |
2024-07-07T01:13:43.005Z |
_version_ |
1803880835850436608 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV066070643</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20240112093052.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">231209s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.cag.2023.10.002</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV066070643</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0097-8493(23)00239-X</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.73</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Liu, Dezhi</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0009-0002-3386-625X</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">GsNeRF: Fast novel view synthesis of dynamic radiance fields</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield 
code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Synthesizing novel views of dynamic 3D scenes is a challenging task in 3D vision. Most current approaches rely on radiance fields built from expensive, time-consuming multi-view camera systems and depend on implicit representations based on Neural Radiance Fields (NeRF). In this study, we introduce GsNeRF, a new representation for dynamic scenes that reconstructs moving objects quickly from a single camera moving through the scene and renders high-quality views at arbitrary time frames and camera poses. We represent the dynamic scene with five grids, applying tensor decomposition to each grid to reduce storage. Since optimizing these planes is the model's primary task, we enforce their spatio-temporal continuity with a smoothness loss. GsNeRF is paired with a miniature MLP that regresses color and is trained via volume rendering. On a range of synthetic and real datasets, our method cuts training time by more than a factor of 100 relative to implicit methods and achieves better rendering quality than explicit methods, striking an overall balance among memory usage, speed, and quality.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Neural rendering</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Novel view synthesis</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Accelerating render</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Tensor decomposition</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wan, Weibing</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0002-7092-9849</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Fang, Zhijun</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zheng, Xiuyuan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Computers & graphics</subfield><subfield code="d">Amsterdam [u.a.]
: Elsevier Science, 1975</subfield><subfield code="g">116, Seite 491-499</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)31622572X</subfield><subfield code="w">(DE-600)1499979-1</subfield><subfield code="w">(DE-576)081984979</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:116</subfield><subfield code="g">pages:491-499</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" 
ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.73</subfield><subfield code="j">Computergraphik</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">116</subfield><subfield code="h">491-499</subfield></datafield></record></collection>
|