Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines
We speak to express ourselves. Sometimes words can capture what we mean; sometimes we mean more than can be said. This is where our visible gestures (those dynamic oscillations of our gaze, face, head, hands, arms and bodies) help. Not only do these co-verbal visual signals help express our intentions, attitudes and emotions, they also help us engage with our conversational partners to get our message across. Understanding how and when a message is supplemented, shaped and changed by auditory and visual signals is crucial for a science ultimately interested in the correct interpretation of transmitted meaning. This special issue highlights research articles that explore co-verbal and nonverbal signals, a key topic in speech communication since these are crucial ingredients in the interpretation of meaning. That is, the meaning of speech is calibrated, augmented and even changed by co-verbal/speech behaviours and gestures including the talker's facial expression, eye contact, gaze direction, arm movements, hand gestures, body motion and orientation, posture, proximity, physical contact, and so on. Understanding expressive signals is a vital step for developing machines that can properly decipher intention and engage as social agents. The special issue is divided into three parts: Auditory-visual speech perception; Characterization and perception of auditory-visual prosody; Computer-generated auditory-visual speech. Below, we introduce these papers with a brief review of relevant issues and previous studies, when needed.
Detailed description
Author: Kim, Jeesun (author)
Format: E-article
Language: English
Published: 2018 (transfer abstract)
Extent: 5 pages
Parent work: Contained in: Comparison of dosing algorithms for acenocoumarol and phenprocoumon using clinical factors with the standard care in the Netherlands - Zhang, Yumao ELSEVIER, 2015, an interdisciplinary journal, Amsterdam
Parent work: volume:98 ; year:2018 ; pages:63-67 ; extent:5
DOI / URN: 10.1016/j.specom.2018.02.001
Catalog ID: ELV042303885
LEADER    01000caa a22002652 4500
001       ELV042303885
003       DE-627
005       20230626001056.0
007       cr uuu---uuuuu
008       180726s2018 xx |||||o 00| ||eng c
024 7     |a 10.1016/j.specom.2018.02.001 |2 doi
028 5 2   |a GBV00000000000165A.pica
035       |a (DE-627)ELV042303885
035       |a (ELSEVIER)S0167-6393(18)30021-9
040       |a DE-627 |b ger |c DE-627 |e rakwb
041       |a eng
082 0     |a 070 |a 400
082 0 4   |a 070 |q DE-600
082 0 4   |a 400 |q DE-600
082 0 4   |a 610 |q VZ
084       |a 44.40 |2 bkl
100 1     |a Kim, Jeesun |e verfasserin |4 aut
245 1 0   |a Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines
264   1   |c 2018transfer abstract
300       |a 5
336       |a nicht spezifiziert |b zzz |2 rdacontent
337       |a nicht spezifiziert |b z |2 rdamedia
338       |a nicht spezifiziert |b zu |2 rdacarrier
520       |a We speak to express ourselves. Sometimes words can capture what we mean; sometimes we mean more than can be said. This is where our visible gestures (those dynamic oscillations of our gaze, face, head, hands, arms and bodies) help. Not only do these co-verbal visual signals help express our intentions, attitudes and emotions, they also help us engage with our conversational partners to get our message across. Understanding how and when a message is supplemented, shaped and changed by auditory and visual signals is crucial for a science ultimately interested in the correct interpretation of transmitted meaning. This special issue highlights research articles that explore co-verbal and nonverbal signals, a key topic in speech communication since these are crucial ingredients in the interpretation of meaning. That is, the meaning of speech is calibrated, augmented and even changed by co-verbal/speech behaviours and gestures including the talker's facial expression, eye contact, gaze direction, arm movements, hand gestures, body motion and orientation, posture, proximity, physical contact, and so on. Understanding expressive signals is a vital step for developing machines that can properly decipher intention and engage as social agents. The special issue is divided into three parts: Auditory-visual speech perception; Characterization and perception of auditory-visual prosody; Computer-generated auditory-visual speech. Below, we introduce these papers with a brief review of relevant issues and previous studies, when needed.
700 1     |a Bailly, Gérard |4 oth
700 1     |a Davis, Chris |4 oth
773 0 8   |i Enthalten in |n North-Holland Publ. Comp |a Zhang, Yumao ELSEVIER |t Comparison of dosing algorithms for acenocoumarol and phenprocoumon using clinical factors with the standard care in the Netherlands |d 2015 |d an interdisciplinary journal |g Amsterdam |w (DE-627)ELV024100463
773 1 8   |g volume:98 |g year:2018 |g pages:63-67 |g extent:5
856 4 0   |u https://doi.org/10.1016/j.specom.2018.02.001 |3 Volltext
912       |a GBV_USEFLAG_U
912       |a GBV_ELV
912       |a SYSFLAG_U
912       |a SSG-OLC-PHA
912       |a SSG-OPC-PHA
912       |a GBV_ILN_40
936 b k   |a 44.40 |j Pharmazie |j Pharmazeutika |q VZ
951       |a AR
952       |d 98 |j 2018 |h 63-67 |g 5
953       |2 045F |a 070
|
author |
Kim, Jeesun |
spellingShingle |
Kim, Jeesun ddc 070 ddc 400 ddc 610 bkl 44.40 Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines |
authorStr |
Kim, Jeesun |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV024100463 |
format |
electronic Article |
dewey-ones |
070 - News media, journalism & publishing 400 - Language 610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
070 400 070 DE-600 400 DE-600 610 VZ 44.40 bkl Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines |
topic |
ddc 070 ddc 400 ddc 610 bkl 44.40 |
topic_unstemmed |
ddc 070 ddc 400 ddc 610 bkl 44.40 |
topic_browse |
ddc 070 ddc 400 ddc 610 bkl 44.40 |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
g b gb c d cd |
hierarchy_parent_title |
Comparison of dosing algorithms for acenocoumarol and phenprocoumon using clinical factors with the standard care in the Netherlands |
hierarchy_parent_id |
ELV024100463 |
dewey-tens |
070 - News media, journalism & publishing 400 - Language 610 - Medicine & health |
hierarchy_top_title |
Comparison of dosing algorithms for acenocoumarol and phenprocoumon using clinical factors with the standard care in the Netherlands |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV024100463 |
title |
Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines |
ctrlnum |
(DE-627)ELV042303885 (ELSEVIER)S0167-6393(18)30021-9 |
title_full |
Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines |
author_sort |
Kim, Jeesun |
journal |
Comparison of dosing algorithms for acenocoumarol and phenprocoumon using clinical factors with the standard care in the Netherlands |
journalStr |
Comparison of dosing algorithms for acenocoumarol and phenprocoumon using clinical factors with the standard care in the Netherlands |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works 400 - Language 600 - Technology |
recordtype |
marc |
publishDateSort |
2018 |
contenttype_str_mv |
zzz |
container_start_page |
63 |
author_browse |
Kim, Jeesun |
container_volume |
98 |
physical |
5 |
class |
070 400 070 DE-600 400 DE-600 610 VZ 44.40 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Kim, Jeesun |
doi_str_mv |
10.1016/j.specom.2018.02.001 |
dewey-full |
070 400 610 |
title_sort |
introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines |
title_auth |
Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines |
abstract |
We speak to express ourselves. Sometimes words can capture what we mean; sometimes we mean more than can be said. This is where our visible gestures – those dynamic oscillations of our gaze, face, head, hand, arms and bodies – help. Not only do these co-verbal visual signals help express our intentions, attitudes and emotions, they also help us engage with our conversational partners to get our message across. Understanding how and when a message is supplemented, shaped and changed by auditory and visual signals is crucial for a science ultimately interested in the correct interpretation of transmitted meaning. This special issue highlights research articles that explore co-verbal and nonverbal signals, a key topic in speech communication since these are crucial ingredients in the interpretation of meaning. That is, the meaning of speech is calibrated, augmented and even changed by co-verbal/speech behaviours and gestures including the talker's facial expression, eye-contact, gaze-direction, arm movements, hand gestures, body motion and orientation, posture, proximity, physical contact, and so on. Understanding expressive signals is a vital step for developing machines that can properly decipher intention and engage as social agents. The special issue is divided into three parts: Auditory-visual speech perception; Characterization and perception of auditory-visual prosody; Computer-generated auditory-visual speech. Below, we introduce these papers with a brief review of relevant issues and previous studies, when needed.
abstractGer |
We speak to express ourselves. Sometimes words can capture what we mean; sometimes we mean more than can be said. This is where our visible gestures – those dynamic oscillations of our gaze, face, head, hand, arms and bodies – help. Not only do these co-verbal visual signals help express our intentions, attitudes and emotions, they also help us engage with our conversational partners to get our message across. Understanding how and when a message is supplemented, shaped and changed by auditory and visual signals is crucial for a science ultimately interested in the correct interpretation of transmitted meaning. This special issue highlights research articles that explore co-verbal and nonverbal signals, a key topic in speech communication since these are crucial ingredients in the interpretation of meaning. That is, the meaning of speech is calibrated, augmented and even changed by co-verbal/speech behaviours and gestures including the talker's facial expression, eye-contact, gaze-direction, arm movements, hand gestures, body motion and orientation, posture, proximity, physical contact, and so on. Understanding expressive signals is a vital step for developing machines that can properly decipher intention and engage as social agents. The special issue is divided into three parts: Auditory-visual speech perception; Characterization and perception of auditory-visual prosody; Computer-generated auditory-visual speech. Below, we introduce these papers with a brief review of relevant issues and previous studies, when needed.
abstract_unstemmed |
We speak to express ourselves. Sometimes words can capture what we mean; sometimes we mean more than can be said. This is where our visible gestures – those dynamic oscillations of our gaze, face, head, hand, arms and bodies – help. Not only do these co-verbal visual signals help express our intentions, attitudes and emotions, they also help us engage with our conversational partners to get our message across. Understanding how and when a message is supplemented, shaped and changed by auditory and visual signals is crucial for a science ultimately interested in the correct interpretation of transmitted meaning. This special issue highlights research articles that explore co-verbal and nonverbal signals, a key topic in speech communication since these are crucial ingredients in the interpretation of meaning. That is, the meaning of speech is calibrated, augmented and even changed by co-verbal/speech behaviours and gestures including the talker's facial expression, eye-contact, gaze-direction, arm movements, hand gestures, body motion and orientation, posture, proximity, physical contact, and so on. Understanding expressive signals is a vital step for developing machines that can properly decipher intention and engage as social agents. The special issue is divided into three parts: Auditory-visual speech perception; Characterization and perception of auditory-visual prosody; Computer-generated auditory-visual speech. Below, we introduce these papers with a brief review of relevant issues and previous studies, when needed.
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA SSG-OPC-PHA GBV_ILN_40 |
title_short |
Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines |
url |
https://doi.org/10.1016/j.specom.2018.02.001 |
remote_bool |
true |
author2 |
Bailly, Gérard Davis, Chris |
author2Str |
Bailly, Gérard Davis, Chris |
ppnlink |
ELV024100463 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth |
doi_str |
10.1016/j.specom.2018.02.001 |
up_date |
2024-07-06T22:26:43.878Z |
_version_ |
1803870330033274880 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV042303885</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626001056.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">180726s2018 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.specom.2018.02.001</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">GBV00000000000165A.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV042303885</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0167-6393(18)30021-9</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">070</subfield><subfield code="a">400</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">070</subfield><subfield code="q">DE-600</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">400</subfield><subfield code="q">DE-600</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">44.40</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" 
ind1="1" ind2=" "><subfield code="a">Kim, Jeesun</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Introduction to the special issue on auditory-visual expressive speech and gesture in humans and machines</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2018transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">5</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">We speak to express ourselves. Sometimes words can capture what we mean; sometimes we mean more than can be said. This is where our visible gestures – those dynamic oscillations of our gaze, face, head, hand, arms and bodies – help. Not only do these co-verbal visual signals help express our intentions, attitudes and emotions, they also help us engage with our conversational partners to get our message across. Understanding how and when a message is supplemented, shaped and changed by auditory and visual signals is crucial for a science ultimately interested in the correct interpretation of transmitted meaning. This special issue highlights research articles that explore co-verbal and nonverbal signals, a key topic in speech communication since these are crucial ingredients in the interpretation of meaning. 
That is, the meaning of speech is calibrated, augmented and even changed by co-verbal/speech behaviours and gestures including the talker's facial expression, eye-contact, gaze-direction, arm movements, hand gestures, body motion and orientation, posture, proximity, physical contact, and so on. Understanding expressive signals is a vital step for developing machines that can properly decipher intention and engage as social agents. The special issue is divided into three parts: Auditory-visual speech perception; Characterization and perception of auditory-visual prosody; Computer-generated auditory-visual speech. Below, we introduce these papers with a brief review of relevant issues and previous studies, when needed.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">We speak to express ourselves. Sometimes words can capture what we mean; sometimes we mean more than can be said. This is where our visible gestures – those dynamic oscillations of our gaze, face, head, hand, arms and bodies – help. Not only do these co-verbal visual signals help express our intentions, attitudes and emotions, they also help us engage with our conversational partners to get our message across. Understanding how and when a message is supplemented, shaped and changed by auditory and visual signals is crucial for a science ultimately interested in the correct interpretation of transmitted meaning. This special issue highlights research articles that explore co-verbal and nonverbal signals, a key topic in speech communication since these are crucial ingredients in the interpretation of meaning. That is, the meaning of speech is calibrated, augmented and even changed by co-verbal/speech behaviours and gestures including the talker's facial expression, eye-contact, gaze-direction, arm movements, hand gestures, body motion and orientation, posture, proximity, physical contact, and so on. 
Understanding expressive signals is a vital step for developing machines that can properly decipher intention and engage as social agents. The special issue is divided into three parts: Auditory-visual speech perception; Characterization and perception of auditory-visual prosody; Computer-generated auditory-visual speech. Below, we introduce these papers with a brief review of relevant issues and previous studies, when needed.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Bailly, Gérard</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Davis, Chris</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">North-Holland Publ. Comp</subfield><subfield code="a">Zhang, Yumao ELSEVIER</subfield><subfield code="t">Comparison of dosing algorithms for acenocoumarol and phenprocoumon using clinical factors with the standard care in the Netherlands</subfield><subfield code="d">2015</subfield><subfield code="d">an interdisciplinary journal</subfield><subfield code="g">Amsterdam</subfield><subfield code="w">(DE-627)ELV024100463</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:98</subfield><subfield code="g">year:2018</subfield><subfield code="g">pages:63-67</subfield><subfield code="g">extent:5</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.specom.2018.02.001</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OPC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">44.40</subfield><subfield code="j">Pharmazie</subfield><subfield code="j">Pharmazeutika</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">98</subfield><subfield code="j">2018</subfield><subfield code="h">63-67</subfield><subfield code="g">5</subfield></datafield><datafield tag="953" ind1=" " ind2=" "><subfield code="2">045F</subfield><subfield code="a">070</subfield></datafield></record></collection>
|
score |
7.398943 |