A taxonomy of weight learning methods for statistical relational learning
Abstract Statistical relational learning (SRL) frameworks are effective at defining probabilistic models over complex relational data. They often use weighted first-order logical rules where the weights of the rules govern probabilistic interactions and are usually learned from data. Existing weight...
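To make the abstracted idea concrete, here is a minimal, hypothetical sketch of the kind of search-based weight learning the article describes (the abstract is given in full in the record below): rule weights are sampled and scored directly on a chosen domain metric (F1 here) rather than on data likelihood, with a crude scale normalization standing in for the paper's scaled-space projection. The `toy_infer` model, the synthetic data, and all parameter choices are illustrative assumptions, not the authors' implementation and not the PSL or MLN APIs.

```python
import random

def toy_infer(weights, features):
    """Toy stand-in for SRL inference (the real frameworks are Markov logic
    networks and probabilistic soft logic): predict 1 when the weighted
    feature sum exceeds 0.5."""
    return [1 if sum(w * f for w, f in zip(weights, x)) > 0.5 else 0
            for x in features]

def f1(y_true, y_pred):
    """Domain metric that the search optimizes directly."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def search_weights(features, labels, n_rules, trials=200, seed=0):
    """Black-box random search over rule-weight configurations, scored on F1
    instead of data likelihood."""
    rng = random.Random(seed)
    best_w, best_score = None, -1.0
    for _ in range(trials):
        w = [rng.uniform(0.0, 10.0) for _ in range(n_rules)]
        # Crude analogue (assumption) of a scaled-space projection: normalize
        # so configurations that differ only by an overall scale coincide.
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
        score = f1(labels, toy_infer(w, features))
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

if __name__ == "__main__":
    rng = random.Random(1)
    X = [[rng.random(), rng.random()] for _ in range(200)]
    y = [1 if 0.7 * a + 0.3 * b > 0.5 else 0 for a, b in X]
    weights, score = search_weights(X, y, n_rules=2)
    print("best weights:", weights, "training F1:", round(score, 3))
```

Any black-box optimizer could replace the sampling loop (random search is used here only for brevity); the point of the sketch is that the search objective is the evaluation metric itself rather than a likelihood surrogate.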
Detailed description
Author: Srinivasan, Sriram [author]
Format: Article
Language: English
Published: 2021
Subject headings: Statistical relational learning
Note: © The Author(s) 2021
Parent work: Contained in: Machine learning - Springer US, 1986, 111(2021), 8, dated 13 Dec., pages 2799-2838
Parent work: volume:111 ; year:2021 ; number:8 ; day:13 ; month:12 ; pages:2799-2838
Links:
DOI / URN: 10.1007/s10994-021-06069-5
Catalog ID: OLC2079265784
LEADER 01000caa a22002652 4500
001    OLC2079265784
003    DE-627
005    20230506050755.0
007    tu
008    221220s2021 xx ||||| 00| ||eng c
024 7_ |a 10.1007/s10994-021-06069-5 |2 doi
035 __ |a (DE-627)OLC2079265784
035 __ |a (DE-He213)s10994-021-06069-5-p
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
082 04 |a 150 |a 004 |q VZ
100 1_ |a Srinivasan, Sriram |e verfasserin |4 aut
245 10 |a A taxonomy of weight learning methods for statistical relational learning
264 _1 |c 2021
336 __ |a Text |b txt |2 rdacontent
337 __ |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __ |a Band |b nc |2 rdacarrier
500 __ |a © The Author(s) 2021
520 __ |a Abstract Statistical relational learning (SRL) frameworks are effective at defining probabilistic models over complex relational data. They often use weighted first-order logical rules where the weights of the rules govern probabilistic interactions and are usually learned from data. Existing weight learning approaches typically attempt to learn a set of weights that maximizes some function of data likelihood; however, this does not always translate to optimal performance on a desired domain metric, such as accuracy or F1 score. In this paper, we introduce a taxonomy of search-based weight learning approaches for SRL frameworks that directly optimize weights on a chosen domain performance metric. To effectively apply these search-based approaches, we introduce a novel projection, referred to as scaled space (SS), that is an accurate representation of the true weight space. We show that SS removes redundancies in the weight space and captures the semantic distance between the possible weight configurations. In order to improve the efficiency of search, we also introduce an approximation of SS which simplifies the process of sampling weight configurations. We demonstrate these approaches on two state-of-the-art SRL frameworks: Markov logic networks and probabilistic soft logic. We perform empirical evaluation on five real-world datasets and evaluate them each on two different metrics. We also compare them against four other weight learning approaches. Our experimental results show that our proposed search-based approaches outperform likelihood-based approaches and yield up to a 10% improvement across a variety of performance metrics. Further, we perform an extensive evaluation to measure the robustness of our approach to different initializations and hyperparameters. The results indicate that our approach is both accurate and robust.
650 _4 |a Statistical relational learning
650 _4 |a Weight learning
650 _4 |a Probabilistic graphical models
650 _4 |a Markov logic networks
650 _4 |a Probabilistic soft logic
650 _4 |a Black-box optimization
700 1_ |a Dickens, Charles |4 aut
700 1_ |a Augustine, Eriq |4 aut
700 1_ |a Farnadi, Golnoosh |4 aut
700 1_ |a Getoor, Lise |4 aut
773 08 |i Enthalten in |t Machine learning |d Springer US, 1986 |g 111(2021), 8 vom: 13. Dez., Seite 2799-2838 |w (DE-627)12920403X |w (DE-600)54638-0 |w (DE-576)014457377 |x 0885-6125 |7 nnns
773 18 |g volume:111 |g year:2021 |g number:8 |g day:13 |g month:12 |g pages:2799-2838
856 41 |u https://doi.org/10.1007/s10994-021-06069-5 |z lizenzpflichtig |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_OLC
912 __ |a SSG-OLC-MAT
951 __ |a AR
952 __ |d 111 |j 2021 |e 8 |b 13 |c 12 |h 2799-2838