Understanding environmental sounds in sentence context
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can...
Detailed description

| | |
|---|---|
| Author: | Uddin, Sophia [author] |
| Format: | E-Article |
| Language: | English |
| Published: | 2018 (transfer abstract) |
| Subjects: | Context; Environmental sound perception; Language; Constraint; Recognition; Speech perception |
| Extent: | 10 |
| Host item: | Contained in: Preconception tests at advanced maternal age - Chronopoulou, Elpiniki, ELSEVIER, 2020, international journal of cognitive science, Amsterdam [u.a.] |
| Host item: | volume:172; year:2018; pages:134-143; extent:10 |
| DOI / URN: | 10.1016/j.cognition.2017.12.009 |
| Catalog ID: | ELV043297471 |
LEADER 01000caa a22002652 4500
001    ELV043297471
003    DE-627
005    20230626003451.0
007    cr uuu---uuuuu
008    180726s2018 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.cognition.2017.12.009 |2 doi
028 52 |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001018.pica
035    |a (DE-627)ELV043297471
035    |a (ELSEVIER)S0010-0277(17)30329-3
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 610 |q VZ
084    |a 44.92 |2 bkl
100 1  |a Uddin, Sophia |e verfasserin |4 aut
245 10 |a Understanding environmental sounds in sentence context
264  1 |c 2018transfer abstract
300    |a 10
336    |a nicht spezifiziert |b zzz |2 rdacontent
337    |a nicht spezifiziert |b z |2 rdamedia
338    |a nicht spezifiziert |b zu |2 rdacarrier
520    |a There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions.
650  7 |a Context |2 Elsevier
650  7 |a Environmental sound perception |2 Elsevier
650  7 |a Language |2 Elsevier
650  7 |a Constraint |2 Elsevier
650  7 |a Recognition |2 Elsevier
650  7 |a Speech perception |2 Elsevier
700 1  |a Heald, Shannon L.M. |4 oth
700 1  |a Van Hedger, Stephen C. |4 oth
700 1  |a Klos, Serena |4 oth
700 1  |a Nusbaum, Howard C. |4 oth
773 08 |i Enthalten in |n Elsevier Science |a Chronopoulou, Elpiniki ELSEVIER |t Preconception tests at advanced maternal age |d 2020 |d international journal of cognitive science |g Amsterdam [u.a.] |w (DE-627)ELV005439426
773 18 |g volume:172 |g year:2018 |g pages:134-143 |g extent:10
856 40 |u https://doi.org/10.1016/j.cognition.2017.12.009 |3 Volltext
912    |a GBV_USEFLAG_U
912    |a GBV_ELV
912    |a SYSFLAG_U
936 bk |a 44.92 |j Gynäkologie |q VZ
951    |a AR
952    |d 172 |j 2018 |h 134-143 |g 10
author_variant |
s u su |
matchkey_str |
uddinsophiahealdshannonlmvanhedgerstephe:2018----:nesadnevrnetlonsne |
hierarchy_sort_str |
2018transfer abstract |
bklnumber |
44.92 |
publishDate |
2018 |
allfields |
10.1016/j.cognition.2017.12.009 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001018.pica (DE-627)ELV043297471 (ELSEVIER)S0010-0277(17)30329-3 DE-627 ger DE-627 rakwb eng 610 VZ 44.92 bkl Uddin, Sophia verfasserin aut Understanding environmental sounds in sentence context 2018transfer abstract 10 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. 
Context Elsevier Environmental sound perception Elsevier Language Elsevier Constraint Elsevier Recognition Elsevier Speech perception Elsevier Heald, Shannon L.M. oth Van Hedger, Stephen C. oth Klos, Serena oth Nusbaum, Howard C. oth Enthalten in Elsevier Science Chronopoulou, Elpiniki ELSEVIER Preconception tests at advanced maternal age 2020 international journal of cognitive science Amsterdam [u.a.] 
(DE-627)ELV005439426 volume:172 year:2018 pages:134-143 extent:10 https://doi.org/10.1016/j.cognition.2017.12.009 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U 44.92 Gynäkologie VZ AR 172 2018 134-143 10 |
language |
English |
source |
Enthalten in Preconception tests at advanced maternal age Amsterdam [u.a.] volume:172 year:2018 pages:134-143 extent:10 |
format_phy_str_mv |
Article |
bklname |
Gynäkologie |
institution |
findex.gbv.de |
topic_facet |
Context Environmental sound perception Language Constraint Recognition Speech perception |
dewey-raw |
610 |
isfreeaccess_bool |
false |
container_title |
Preconception tests at advanced maternal age |
authorswithroles_txt_mv |
Uddin, Sophia @@aut@@ Heald, Shannon L.M. @@oth@@ Van Hedger, Stephen C. @@oth@@ Klos, Serena @@oth@@ Nusbaum, Howard C. @@oth@@ |
publishDateDaySort_date |
2018-01-01T00:00:00Z |
hierarchy_top_id |
ELV005439426 |
dewey-sort |
3610 |
id |
ELV043297471 |
language_de |
English |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV043297471</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626003451.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">180726s2018 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.cognition.2017.12.009</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001018.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV043297471</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0010-0277(17)30329-3</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">44.92</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Uddin, Sophia</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Understanding environmental sounds in sentence context</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2018transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">10</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht 
spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. 
This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Context</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Environmental sound perception</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Language</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Constraint</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Recognition</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Speech perception</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Heald, Shannon L.M.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Van Hedger, Stephen C.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Klos, Serena</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Nusbaum, Howard C.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier Science</subfield><subfield code="a">Chronopoulou, Elpiniki ELSEVIER</subfield><subfield code="t">Preconception tests at advanced maternal age</subfield><subfield code="d">2020</subfield><subfield code="d">international journal of cognitive science</subfield><subfield code="g">Amsterdam [u.a.]</subfield><subfield 
code="w">(DE-627)ELV005439426</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:172</subfield><subfield code="g">year:2018</subfield><subfield code="g">pages:134-143</subfield><subfield code="g">extent:10</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.cognition.2017.12.009</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">44.92</subfield><subfield code="j">Gynäkologie</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">172</subfield><subfield code="j">2018</subfield><subfield code="h">134-143</subfield><subfield code="g">10</subfield></datafield></record></collection>
|
author |
Uddin, Sophia |
spellingShingle |
Uddin, Sophia ddc 610 bkl 44.92 Elsevier Context Elsevier Environmental sound perception Elsevier Language Elsevier Constraint Elsevier Recognition Elsevier Speech perception Understanding environmental sounds in sentence context |
authorStr |
Uddin, Sophia |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV005439426 |
format |
electronic Article |
dewey-ones |
610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
610 VZ 44.92 bkl Understanding environmental sounds in sentence context Context Elsevier Environmental sound perception Elsevier Language Elsevier Constraint Elsevier Recognition Elsevier Speech perception Elsevier |
topic |
ddc 610 bkl 44.92 Elsevier Context Elsevier Environmental sound perception Elsevier Language Elsevier Constraint Elsevier Recognition Elsevier Speech perception |
topic_unstemmed |
ddc 610 bkl 44.92 Elsevier Context Elsevier Environmental sound perception Elsevier Language Elsevier Constraint Elsevier Recognition Elsevier Speech perception |
topic_browse |
ddc 610 bkl 44.92 Elsevier Context Elsevier Environmental sound perception Elsevier Language Elsevier Constraint Elsevier Recognition Elsevier Speech perception |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
s l h sl slh h s c v hsc hscv s k sk h c n hc hcn |
hierarchy_parent_title |
Preconception tests at advanced maternal age |
hierarchy_parent_id |
ELV005439426 |
dewey-tens |
610 - Medicine & health |
hierarchy_top_title |
Preconception tests at advanced maternal age |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV005439426 |
title |
Understanding environmental sounds in sentence context |
ctrlnum |
(DE-627)ELV043297471 (ELSEVIER)S0010-0277(17)30329-3 |
title_full |
Understanding environmental sounds in sentence context |
author_sort |
Uddin, Sophia |
journal |
Preconception tests at advanced maternal age |
journalStr |
Preconception tests at advanced maternal age |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2018 |
contenttype_str_mv |
zzz |
container_start_page |
134 |
author_browse |
Uddin, Sophia |
container_volume |
172 |
physical |
10 |
class |
610 VZ 44.92 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Uddin, Sophia |
doi_str_mv |
10.1016/j.cognition.2017.12.009 |
dewey-full |
610 |
title_sort |
understanding environmental sounds in sentence context |
title_auth |
Understanding environmental sounds in sentence context |
abstract |
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. |
abstractGer |
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. |
abstract_unstemmed |
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. |
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U |
title_short |
Understanding environmental sounds in sentence context |
url |
https://doi.org/10.1016/j.cognition.2017.12.009 |
remote_bool |
true |
author2 |
Heald, Shannon L.M. Van Hedger, Stephen C. Klos, Serena Nusbaum, Howard C. |
author2Str |
Heald, Shannon L.M. Van Hedger, Stephen C. Klos, Serena Nusbaum, Howard C. |
ppnlink |
ELV005439426 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth oth |
doi_str |
10.1016/j.cognition.2017.12.009 |
up_date |
2024-07-06T18:27:17.783Z |
_version_ |
1803855266088747008 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV043297471</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626003451.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">180726s2018 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.cognition.2017.12.009</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001018.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV043297471</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0010-0277(17)30329-3</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">44.92</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Uddin, Sophia</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Understanding environmental sounds in sentence context</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2018transfer abstract</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">10</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht 
spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. 
This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Context</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Environmental sound perception</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Language</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Constraint</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Recognition</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Speech perception</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Heald, Shannon L.M.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Van Hedger, Stephen C.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Klos, Serena</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Nusbaum, Howard C.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier Science</subfield><subfield code="a">Chronopoulou, Elpiniki ELSEVIER</subfield><subfield code="t">Preconception tests at advanced maternal age</subfield><subfield code="d">2020</subfield><subfield code="d">international journal of cognitive science</subfield><subfield code="g">Amsterdam [u.a.]</subfield><subfield 
code="w">(DE-627)ELV005439426</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:172</subfield><subfield code="g">year:2018</subfield><subfield code="g">pages:134-143</subfield><subfield code="g">extent:10</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.cognition.2017.12.009</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">44.92</subfield><subfield code="j">Gynäkologie</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">172</subfield><subfield code="j">2018</subfield><subfield code="h">134-143</subfield><subfield code="g">10</subfield></datafield></record></collection>
|