Logit tree models for discrete choice data with application to advice-seeking preferences among Chinese Christians
Abstract: Logit models are popular tools for analyzing discrete choice and ranking data. The models assume that judges rate each item with a measurable utility, and the ordering of a judge's utilities determines the outcome. Logit models have been proven to be powerful tools, but they become difficult to interpret if the models contain nonlinear and interaction terms. …
Detailed description

Author: Yu, Philip L. H. [author]
Format: Article
Language: English
Published: 2015
Keywords: Binary data; Decision tree; Multinomial data; Ranking data; Variable selection
Note: © Springer-Verlag Berlin Heidelberg 2015
Contained in: Computational statistics - Springer Berlin Heidelberg, 1992, 31(2015), issue 2, 13 June, pages 799-827
Contained in: volume:31 ; year:2015 ; number:2 ; day:13 ; month:06 ; pages:799-827
DOI: 10.1007/s00180-015-0588-4
Catalog ID: OLC2070884554
LEADER  01000caa a22002652 4500
001     OLC2070884554
003     DE-627
005     20230323142611.0
007     tu
008     200820s2015 xx ||||| 00| ||eng c
024 7   |a 10.1007/s00180-015-0588-4 |2 doi
035     |a (DE-627)OLC2070884554
035     |a (DE-He213)s00180-015-0588-4-p
040     |a DE-627 |b ger |c DE-627 |e rakwb
041     |a eng
082 04  |a 510 |a 004 |q VZ
100 1   |a Yu, Philip L. H. |e verfasserin |4 aut
245 10  |a Logit tree models for discrete choice data with application to advice-seeking preferences among Chinese Christians
264  1  |c 2015
336     |a Text |b txt |2 rdacontent
337     |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338     |a Band |b nc |2 rdacarrier
500     |a © Springer-Verlag Berlin Heidelberg 2015
520     |a Abstract Logit models are popular tools for analyzing discrete choice and ranking data. The models assume that judges rate each item with a measurable utility, and the ordering of a judge’s utilities determines the outcome. Logit models have been proven to be powerful tools, but they become difficult to interpret if the models contain nonlinear and interaction terms. We extended the logit models by adding a decision tree structure to overcome this difficulty. We introduced a new method of tree splitting variable selection that distinguishes the nonlinear and linear effects, and the variable with the strongest nonlinear effect will be selected in the view that linear effect is best modeled using the logit model. Decision trees built in this fashion were shown to have smaller sizes than those using loglikelihood-based splitting criteria. In addition, the proposed splitting methods could save computational time and avoid bias in choosing the optimal splitting variable. Issues on variable selection in logit models are also investigated, and forward selection criterion was shown to work well with logit tree models. Focused on ranking data, simulations are carried out and the results showed that our proposed splitting methods are unbiased. Finally, to demonstrate the feasibility of the logit tree models, they were applied to analyze two datasets, one with binary outcome and the other with ranking outcome.
650  4  |a Binary data
650  4  |a Decision tree
650  4  |a Multinomial data
650  4  |a Ranking data
650  4  |a Variable selection
700 1   |a Lee, Paul H. |4 aut
700 1   |a Cheung, S. F. |4 aut
700 1   |a Lau, Esther Y. Y. |4 aut
700 1   |a Mok, Doris S. Y. |4 aut
700 1   |a Hui, Harry C. |4 aut
773 08  |i Enthalten in |t Computational statistics |d Springer Berlin Heidelberg, 1992 |g 31(2015), 2 vom: 13. Juni, Seite 799-827 |w (DE-627)131054694 |w (DE-600)1104678-8 |w (DE-576)028053559 |x 0943-4062 |7 nnns
773 18  |g volume:31 |g year:2015 |g number:2 |g day:13 |g month:06 |g pages:799-827
856 41  |u https://doi.org/10.1007/s00180-015-0588-4 |z lizenzpflichtig |3 Volltext
912     |a GBV_USEFLAG_A
912     |a SYSFLAG_A
912     |a GBV_OLC
912     |a SSG-OLC-MAT
912     |a SSG-OPC-MAT
912     |a GBV_ILN_70
912     |a GBV_ILN_267
912     |a GBV_ILN_2018
912     |a GBV_ILN_2088
912     |a GBV_ILN_4305
951     |a AR
952     |d 31 |j 2015 |e 2 |b 13 |c 06 |h 799-827
H.</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Logit tree models for discrete choice data with application to advice-seeking preferences among Chinese Christians</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2015</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer-Verlag Berlin Heidelberg 2015</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Logit models are popular tools for analyzing discrete choice and ranking data. The models assume that judges rate each item with a measurable utility, and the ordering of a judge’s utilities determines the outcome. Logit models have been proven to be powerful tools, but they become difficult to interpret if the models contain nonlinear and interaction terms. We extended the logit models by adding a decision tree structure to overcome this difficulty. We introduced a new method of tree splitting variable selection that distinguishes the nonlinear and linear effects, and the variable with the strongest nonlinear effect will be selected in the view that linear effect is best modeled using the logit model. Decision trees built in this fashion were shown to have smaller sizes than those using loglikelihood-based splitting criteria. 
In addition, the proposed splitting methods could save computational time and avoid bias in choosing the optimal splitting variable. Issues on variable selection in logit models are also investigated, and forward selection criterion was shown to work well with logit tree models. Focused on ranking data, simulations are carried out and the results showed that our proposed splitting methods are unbiased. Finally, to demonstrate the feasibility of the logit tree models, they were applied to analyze two datasets, one with binary outcome and the other with ranking outcome.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Binary data</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Decision tree</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Multinomial data</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Ranking data</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Variable selection</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lee, Paul H.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Cheung, S. F.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lau, Esther Y. Y.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Mok, Doris S. Y.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Hui, Harry C.</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Computational statistics</subfield><subfield code="d">Springer Berlin Heidelberg, 1992</subfield><subfield code="g">31(2015), 2 vom: 13. 
Juni, Seite 799-827</subfield><subfield code="w">(DE-627)131054694</subfield><subfield code="w">(DE-600)1104678-8</subfield><subfield code="w">(DE-576)028053559</subfield><subfield code="x">0943-4062</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:31</subfield><subfield code="g">year:2015</subfield><subfield code="g">number:2</subfield><subfield code="g">day:13</subfield><subfield code="g">month:06</subfield><subfield code="g">pages:799-827</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s00180-015-0588-4</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OPC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_267</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2018</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">31</subfield><subfield code="j">2015</subfield><subfield code="e">2</subfield><subfield code="b">13</subfield><subfield code="c">06</subfield><subfield 
code="h">799-827</subfield></datafield></record></collection>
|