Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics
Abstract: While video-based activity analysis and recognition has received much attention, a large body of existing work deals with activities of a single subject. Modeling and recognition of coordinated multi-subject activities, or group activities, present in a variety of applications such as surveillance, sports, and biological monitoring records, etc., is the main objective of this paper. Unlike earlier attempts which model the complex spatial temporal constraints among multiple subjects with a parametric Bayesian network, we propose a compact and discriminative descriptor referred to as the Temporal Interaction Matrix for representing a coordinated group motion pattern. Moreover, we characterize the space of the Temporal Interaction Matrices using the Discriminative Temporal Interaction Manifold (DTIM), and use it as a framework within which we develop a data-driven strategy to characterize the group motion pattern without employing specific domain knowledge. In particular, we establish probability densities on the DTIM for compactly describing the statistical properties of the coordinations and interactions among multiple subjects in a group activity. For each class of group activity, we learn a multi-modal density function on the DTIM. A Maximum a Posteriori (MAP) classifier on the manifold is then designed for recognizing new activities. In addition, we have extended this model to one with which we can explicitly distinguish the participants from non-participants. We demonstrate how the framework can be applied to motions represented by point trajectories as well as articulated human actions represented by images. Experiments on both cases show the effectiveness of the proposed approach.
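The recognition pipeline summarized in the abstract (learn a per-class density over a matrix descriptor, then apply a Maximum a Posteriori rule) can be sketched in miniature as follows. This is an illustrative toy only: it uses flattened vectors in plain Euclidean space with a Gaussian kernel density estimate, not the paper's Riemannian statistics on the DTIM, and all names (`kde_logpdf`, `map_classify`, the "approach"/"avoid" toy classes) are hypothetical.

```python
import math

def kde_logpdf(x, samples, bandwidth=0.5):
    """Log of a Gaussian kernel density estimate at point x (a flat vector)."""
    logs = []
    for s in samples:
        sq = sum((a - b) ** 2 for a, b in zip(x, s))
        logs.append(-sq / (2 * bandwidth ** 2))
    # Log-mean-exp over kernels; the normalizing constant is shared by all
    # classes, so dropping it does not change the MAP decision.
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs) / len(samples))

def map_classify(x, class_samples, priors):
    """Pick the class maximizing log prior + log density (the MAP rule)."""
    best, best_score = None, -math.inf
    for label, samples in class_samples.items():
        score = math.log(priors[label]) + kde_logpdf(x, samples)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy "interaction descriptors": 2x2 matrices flattened to length-4 vectors.
classes = {
    "approach": [[0.9, 0.1, 0.1, 0.9], [1.0, 0.2, 0.0, 0.8]],
    "avoid":    [[0.1, 0.9, 0.9, 0.1], [0.0, 0.8, 1.0, 0.2]],
}
priors = {"approach": 0.5, "avoid": 0.5}

print(map_classify([0.95, 0.15, 0.05, 0.85], classes, priors))  # expect "approach"
```

The multi-kernel density per class mirrors the abstract's point that each activity class gets a multi-modal density function rather than a single Gaussian.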
Detailed Description

Author: Li, Ruonan [author]
Format: Article
Language: English
Published: 2012
Subjects: Event analysis; Activity recognition
Note: © Springer Science+Business Media, LLC 2012
Contained in: International journal of computer vision - Springer US, 1987, 101(2012), no. 2, 21 Sept., pp. 305-328
Host item: volume:101 ; year:2012 ; number:2 ; day:21 ; month:09 ; pages:305-328
DOI / URN: 10.1007/s11263-012-0573-0
Catalog ID: OLC2057747034
LEADER 01000caa a22002652 4500
001 OLC2057747034
003 DE-627
005 20230504072135.0
007 tu
008 200819s2012 xx ||||| 00| ||eng c
024 7 |a 10.1007/s11263-012-0573-0 |2 doi
035 |a (DE-627)OLC2057747034
035 |a (DE-He213)s11263-012-0573-0-p
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
082 0 4 |a 004 |q VZ
100 1 |a Li, Ruonan |e verfasserin |4 aut
245 1 0 |a Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics
264 1 |c 2012
336 |a Text |b txt |2 rdacontent
337 |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 |a Band |b nc |2 rdacarrier
500 |a © Springer Science+Business Media, LLC 2012
520 |a Abstract While video-based activity analysis and recognition has received much attention, a large body of existing work deals with activities of a single subject. Modeling and recognition of coordinated multi-subject activities, or group activities, present in a variety of applications such as surveillance, sports, and biological monitoring records, etc., is the main objective of this paper. Unlike earlier attempts which model the complex spatial temporal constraints among multiple subjects with a parametric Bayesian network, we propose a compact and discriminative descriptor referred to as the Temporal Interaction Matrix for representing a coordinated group motion pattern. Moreover, we characterize the space of the Temporal Interaction Matrices using the Discriminative Temporal Interaction Manifold (DTIM), and use it as a framework within which we develop a data-driven strategy to characterize the group motion pattern without employing specific domain knowledge. In particular, we establish probability densities on the DTIM for compactly describing the statistical properties of the coordinations and interactions among multiple subjects in a group activity. For each class of group activity, we learn a multi-modal density function on the DTIM. A Maximum a Posteriori (MAP) classifier on the manifold is then designed for recognizing new activities. In addition, we have extended this model to one with which we can explicitly distinguish the participants from non-participants. We demonstrate how the framework can be applied to motions represented by point trajectories as well as articulated human actions represented by images. Experiments on both cases show the effectiveness of the proposed approach.
650 4 |a Event analysis
650 4 |a Activity recognition
700 1 |a Chellappa, Rama |4 aut
700 1 |a Zhou, Shaohua Kevin |4 aut
773 0 8 |i Enthalten in |t International journal of computer vision |d Springer US, 1987 |g 101(2012), 2 vom: 21. Sept., Seite 305-328 |w (DE-627)129354252 |w (DE-600)155895-X |w (DE-576)018081428 |x 0920-5691 |7 nnns
773 1 8 |g volume:101 |g year:2012 |g number:2 |g day:21 |g month:09 |g pages:305-328
856 4 1 |u https://doi.org/10.1007/s11263-012-0573-0 |z lizenzpflichtig |3 Volltext
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_OLC
912 |a SSG-OLC-MAT
912 |a GBV_ILN_22
912 |a GBV_ILN_24
912 |a GBV_ILN_70
912 |a GBV_ILN_2004
912 |a GBV_ILN_2006
912 |a GBV_ILN_2012
912 |a GBV_ILN_2244
912 |a GBV_ILN_4046
912 |a GBV_ILN_4700
951 |a AR
952 |d 101 |j 2012 |e 2 |b 21 |c 09 |h 305-328
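The 773 1 8 field above encodes the host-item linkage as repeated `key:value` subfields (`volume:101`, `pages:305-328`, ...). A minimal sketch of turning those values into structured data; the `parse_linkage` helper is hypothetical, not part of any MARC library.

```python
def parse_linkage(subfields):
    """Split MARC 773 $g values like 'volume:101' into a key/value dict."""
    out = {}
    for sf in subfields:
        key, _, value = sf.partition(":")
        out[key.strip()] = value.strip()
    return out

# The $g values exactly as they appear in this record's 773 1 8 field.
g_values = ["volume:101", "year:2012", "number:2", "day:21", "month:09", "pages:305-328"]
info = parse_linkage(g_values)
print(info["volume"], info["pages"])  # 101 305-328
```

Downstream index fields such as container_volume (101) and container_start_page (305) are derived from exactly this linkage.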
author_variant |
r l rl r c rc s k z sk skz |
matchkey_str |
article:09205691:2012----::eonznitrcieruatvteuigeprlneatomtiead |
hierarchy_sort_str |
2012 |
publishDate |
2012 |
language |
English |
source |
Enthalten in International journal of computer vision 101(2012), 2 vom: 21. Sept., Seite 305-328 volume:101 year:2012 number:2 day:21 month:09 pages:305-328 |
sourceStr |
Enthalten in International journal of computer vision 101(2012), 2 vom: 21. Sept., Seite 305-328 volume:101 year:2012 number:2 day:21 month:09 pages:305-328 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Event analysis Activity recognition |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
International journal of computer vision |
authorswithroles_txt_mv |
Li, Ruonan @@aut@@ Chellappa, Rama @@aut@@ Zhou, Shaohua Kevin @@aut@@ |
publishDateDaySort_date |
2012-09-21T00:00:00Z |
hierarchy_top_id |
129354252 |
dewey-sort |
14 |
id |
OLC2057747034 |
language_de |
englisch |
author |
Li, Ruonan |
authorStr |
Li, Ruonan |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)129354252 |
format |
Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
0920-5691 |
topic_title |
004 VZ Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics Event analysis Activity recognition |
topic |
ddc 004 misc Event analysis misc Activity recognition |
topic_unstemmed |
ddc 004 misc Event analysis misc Activity recognition |
topic_browse |
ddc 004 misc Event analysis misc Activity recognition |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
International journal of computer vision |
hierarchy_parent_id |
129354252 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
International journal of computer vision |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)129354252 (DE-600)155895-X (DE-576)018081428 |
title |
Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics |
ctrlnum |
(DE-627)OLC2057747034 (DE-He213)s11263-012-0573-0-p |
title_full |
Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics |
author_sort |
Li, Ruonan |
journal |
International journal of computer vision |
journalStr |
International journal of computer vision |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2012 |
contenttype_str_mv |
txt |
container_start_page |
305 |
author_browse |
Li, Ruonan Chellappa, Rama Zhou, Shaohua Kevin |
container_volume |
101 |
class |
004 VZ |
format_se |
Aufsätze |
author-letter |
Li, Ruonan |
doi_str_mv |
10.1007/s11263-012-0573-0 |
dewey-full |
004 |
title_sort |
recognizing interactive group activities using temporal interaction matrices and their riemannian statistics |
title_auth |
Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics |
abstract |
Abstract While video-based activity analysis and recognition has received much attention, a large body of existing work deals with activities of a single subject. Modeling and recognition of coordinated multi-subject activities, or group activities, present in a variety of applications such as surveillance, sports, and biological monitoring records, etc., is the main objective of this paper. Unlike earlier attempts which model the complex spatial temporal constraints among multiple subjects with a parametric Bayesian network, we propose a compact and discriminative descriptor referred to as the Temporal Interaction Matrix for representing a coordinated group motion pattern. Moreover, we characterize the space of the Temporal Interaction Matrices using the Discriminative Temporal Interaction Manifold (DTIM), and use it as a framework within which we develop a data-driven strategy to characterize the group motion pattern without employing specific domain knowledge. In particular, we establish probability densities on the DTIM for compactly describing the statistical properties of the coordinations and interactions among multiple subjects in a group activity. For each class of group activity, we learn a multi-modal density function on the DTIM. A Maximum a Posteriori (MAP) classifier on the manifold is then designed for recognizing new activities. In addition, we have extended this model to one with which we can explicitly distinguish the participants from non-participants. We demonstrate how the framework can be applied to motions represented by point trajectories as well as articulated human actions represented by images. Experiments on both cases show the effectiveness of the proposed approach. © Springer Science+Business Media, LLC 2012 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_22 GBV_ILN_24 GBV_ILN_70 GBV_ILN_2004 GBV_ILN_2006 GBV_ILN_2012 GBV_ILN_2244 GBV_ILN_4046 GBV_ILN_4700 |
container_issue |
2 |
title_short |
Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics |
url |
https://doi.org/10.1007/s11263-012-0573-0 |
remote_bool |
false |
author2 |
Chellappa, Rama Zhou, Shaohua Kevin |
author2Str |
Chellappa, Rama Zhou, Shaohua Kevin |
ppnlink |
129354252 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11263-012-0573-0 |
up_date |
2024-07-03T16:08:44.942Z |
_version_ |
1803574758545031168 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2057747034</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230504072135.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200819s2012 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11263-012-0573-0</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2057747034</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11263-012-0573-0-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Li, Ruonan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Recognizing Interactive Group Activities Using Temporal Interaction Matrices and Their Riemannian Statistics</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2012</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media, LLC 2012</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract While video-based activity analysis and recognition has received much attention, a large body of existing work deals with activities of a single subject. Modeling and recognition of coordinated multi-subject activities, or group activities, present in a variety of applications such as surveillance, sports, and biological monitoring records, etc., is the main objective of this paper. Unlike earlier attempts which model the complex spatial temporal constraints among multiple subjects with a parametric Bayesian network, we propose a compact and discriminative descriptor referred to as the Temporal Interaction Matrix for representing a coordinated group motion pattern. Moreover, we characterize the space of the Temporal Interaction Matrices using the Discriminative Temporal Interaction Manifold (DTIM), and use it as a framework within which we develop a data-driven strategy to characterize the group motion pattern without employing specific domain knowledge. In particular, we establish probability densities on the DTIM for compactly describing the statistical properties of the coordinations and interactions among multiple subjects in a group activity. For each class of group activity, we learn a multi-modal density function on the DTIM. A Maximum a Posteriori (MAP) classifier on the manifold is then designed for recognizing new activities. In addition, we have extended this model to one with which we can explicitly distinguish the participants from non-participants. We demonstrate how the framework can be applied to motions represented by point trajectories as well as articulated human actions represented by images. 
Experiments on both cases show the effectiveness of the proposed approach.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Event analysis</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Activity recognition</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Chellappa, Rama</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhou, Shaohua Kevin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">International journal of computer vision</subfield><subfield code="d">Springer US, 1987</subfield><subfield code="g">101(2012), 2 vom: 21. Sept., Seite 305-328</subfield><subfield code="w">(DE-627)129354252</subfield><subfield code="w">(DE-600)155895-X</subfield><subfield code="w">(DE-576)018081428</subfield><subfield code="x">0920-5691</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:101</subfield><subfield code="g">year:2012</subfield><subfield code="g">number:2</subfield><subfield code="g">day:21</subfield><subfield code="g">month:09</subfield><subfield code="g">pages:305-328</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11263-012-0573-0</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2244</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">101</subfield><subfield code="j">2012</subfield><subfield code="e">2</subfield><subfield code="b">21</subfield><subfield code="c">09</subfield><subfield code="h">305-328</subfield></datafield></record></collection>
|
score |
7.3976746 |