Cross-lingual transfer learning for relation extraction using Universal Dependencies
This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in text. The goal of the task is to train classifiers for low-resource languages by means of annotated data from high-resource languages. Related methods usually employ parallel data or machine translation (MT) to project annotated data from a source to a target language. However, the availability and quality of parallel data and MT are major challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to utilize a tree-based representation of the data that is highly informative for classifying semantic relations and is also shared among different languages. All training and test data are represented in this form. We propose to use Universal Dependencies (UD) parsing, a language-agnostic formalism for representing syntactic structure. Equipping UD parse trees with multilingual word embeddings makes an ideal representation for the cross-language relation extraction task. We propose two deep networks that use this representation: the first utilizes the shortest dependency path through the UD tree, while the second employs UD-based positional embeddings. Experiments are performed using SemEval 2010 Task 8 training data, with French and Farsi as the test languages. The results show 63.9% and 56.2% F1 scores for the French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. This work can be considered a simple yet powerful baseline for further investigation into cross-language tasks.
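The abstract's first network consumes the shortest dependency path (SDP) between the two entity tokens, and the second uses tree-based token positions (each token's distance to an entity along the UD tree). A minimal sketch of both computations over a toy parse, assuming a simple head-index encoding of the tree; the function names and the encoding are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

def build_adjacency(heads):
    """Undirected adjacency over a dependency tree. `heads[i-1]` is the
    1-based head of token i; 0 marks the root."""
    adj = {i: set() for i in range(1, len(heads) + 1)}
    for i, h in enumerate(heads, start=1):
        if h != 0:
            adj[i].add(h)
            adj[h].add(i)
    return adj

def shortest_dependency_path(heads, src, dst):
    """BFS over the tree; returns the token ids on the SDP from src to dst."""
    adj = build_adjacency(heads)
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # disconnected (malformed tree)

def tree_distances(heads, entity):
    """Distance of every token to `entity` along the tree, i.e. the
    UD-based positions a positional-embedding layer would look up."""
    adj = build_adjacency(heads)
    dist = {entity: 0}
    queue = deque([entity])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Toy parse of "The burst has been caused by pressure":
# heads[i-1] is the head of token i ("caused" is the root).
heads = [2, 5, 5, 5, 0, 7, 5]
print(shortest_dependency_path(heads, 2, 7))  # [2, 5, 7]: burst -> caused -> pressure
```

Because both inputs are derived from the tree rather than from surface word order, the same featurization applies unchanged to any language with a UD treebank, which is what makes the representation transferable.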
Full description

Author: Taghizadeh, Nasrin [author]
Format: E-Article
Language: English
Published: 2022 (transfer abstract)
Subjects: Relation extraction; Dependency context; Universal Dependency Parsing; Tree-based models
Contained in: Intensive schooling and cognitive ability: A case of Polish educational reform - Karwowski, Maciej ELSEVIER, 2021, London
Contained in: volume:71 ; year:2022 ; pages:0
DOI: 10.1016/j.csl.2021.101265
Catalog ID: ELV055418643
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | ELV055418643 | ||
003 | DE-627 | ||
005 | 20230626041636.0 | ||
007 | cr uuu---uuuuu | ||
008 | 220105s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.csl.2021.101265 |2 doi | |
028 | 5 | 2 | |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001511.pica |
035 | |a (DE-627)ELV055418643 | ||
035 | |a (ELSEVIER)S0885-2308(21)00071-1 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
082 | 0 | 4 | |a 150 |q VZ |
100 | 1 | |a Taghizadeh, Nasrin |e verfasserin |4 aut | |
245 | 1 | 0 | |a Cross-lingual transfer learning for relation extraction using Universal Dependencies |
264 | 1 | |c 2022transfer abstract | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a nicht spezifiziert |b z |2 rdamedia | ||
338 | |a nicht spezifiziert |b zu |2 rdacarrier | ||
520 | |a This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in the text. The goal of the task is to train classifiers for low-resource languages by means of the annotated data from high-resource languages. Related methods usually employ parallel data or Machine Translation (MT) to project annotated data from a source to a target language. However, the availability and the quality of parallel data and MT are big challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to utilize a tree-based representation of data, which is highly informative for classifying semantic relations, and also shared among different languages. All the training and test data are shown using this representation. We propose to use the Universal Dependency (UD) parsing, which is a language-agnostic formalism for representation of syntactic structures. Equipping UD parse trees with multi-lingual word embeddings makes an ideal representation for the cross-language relation extraction task. We propose two deep networks to use this representation. The first one utilizes the Shortest Dependency Path of UD trees, while the second employs the UD-based positional embeddings. Experiments are performed using SemEval 2010 Task 8 training data, whereas French and Farsi are the test languages. The results show 63.9% and 56.2% F1 scores, for French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. This work can be considered a simple yet powerful baseline for further investigation into the cross-language tasks. | ||
650 | 7 | |a Relation extraction |2 Elsevier | |
650 | 7 | |a Dependency context |2 Elsevier | |
650 | 7 | |a Universal Dependency Parsing |2 Elsevier | |
650 | 7 | |a Tree-based models |2 Elsevier | |
700 | 1 | |a Faili, Heshaam |4 oth | |
773 | 0 | 8 | |i Enthalten in |n Academic Press |a Karwowski, Maciej ELSEVIER |t Intensive schooling and cognitive ability: A case of Polish educational reform |d 2021 |g London |w (DE-627)ELV006450652 |
773 | 1 | 8 | |g volume:71 |g year:2022 |g pages:0 |
856 | 4 | 0 | |u https://doi.org/10.1016/j.csl.2021.101265 |3 Volltext |
912 | |a GBV_USEFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SYSFLAG_U | ||
951 | |a AR | ||
952 | |d 71 |j 2022 |h 0 |
code="d">71</subfield><subfield code="j">2022</subfield><subfield code="h">0</subfield></datafield></record></collection>
author |
Taghizadeh, Nasrin |
spellingShingle |
Taghizadeh, Nasrin ddc 150 Elsevier Relation extraction Elsevier Dependency context Elsevier Universal Dependency Parsing Elsevier Tree-based models Cross-lingual transfer learning for relation extraction using Universal Dependencies |
authorStr |
Taghizadeh, Nasrin |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV006450652 |
format |
electronic Article |
dewey-ones |
150 - Psychology |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
150 VZ Cross-lingual transfer learning for relation extraction using Universal Dependencies Relation extraction Elsevier Dependency context Elsevier Universal Dependency Parsing Elsevier Tree-based models Elsevier |
topic |
ddc 150 Elsevier Relation extraction Elsevier Dependency context Elsevier Universal Dependency Parsing Elsevier Tree-based models |
topic_unstemmed |
ddc 150 Elsevier Relation extraction Elsevier Dependency context Elsevier Universal Dependency Parsing Elsevier Tree-based models |
topic_browse |
ddc 150 Elsevier Relation extraction Elsevier Dependency context Elsevier Universal Dependency Parsing Elsevier Tree-based models |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
h f hf |
hierarchy_parent_title |
Intensive schooling and cognitive ability: A case of Polish educational reform |
hierarchy_parent_id |
ELV006450652 |
dewey-tens |
150 - Psychology |
hierarchy_top_title |
Intensive schooling and cognitive ability: A case of Polish educational reform |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV006450652 |
title |
Cross-lingual transfer learning for relation extraction using Universal Dependencies |
ctrlnum |
(DE-627)ELV055418643 (ELSEVIER)S0885-2308(21)00071-1 |
title_full |
Cross-lingual transfer learning for relation extraction using Universal Dependencies |
author_sort |
Taghizadeh, Nasrin |
journal |
Intensive schooling and cognitive ability: A case of Polish educational reform |
journalStr |
Intensive schooling and cognitive ability: A case of Polish educational reform |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
100 - Philosophy & psychology |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
0 |
author_browse |
Taghizadeh, Nasrin |
container_volume |
71 |
class |
150 VZ |
format_se |
Elektronische Aufsätze |
author-letter |
Taghizadeh, Nasrin |
doi_str_mv |
10.1016/j.csl.2021.101265 |
dewey-full |
150 |
title_sort |
cross-lingual transfer learning for relation extraction using universal dependencies |
title_auth |
Cross-lingual transfer learning for relation extraction using Universal Dependencies |
abstract |
This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in text. The goal of the task is to train classifiers for low-resource languages by means of annotated data from high-resource languages. Related methods usually employ parallel data or machine translation (MT) to project annotated data from a source to a target language. However, the availability and quality of parallel data and MT are major challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to utilize a tree-based representation of the data, which is highly informative for classifying semantic relations and is also shared among different languages. All training and test data are encoded in this representation. We propose to use Universal Dependency (UD) parsing, a language-agnostic formalism for representing syntactic structure. Equipping UD parse trees with multilingual word embeddings yields an ideal representation for the cross-language relation extraction task. We propose two deep networks that use this representation: the first utilizes the Shortest Dependency Path of UD trees, while the second employs UD-based positional embeddings. Experiments are performed using the SemEval 2010 Task 8 training data, with French and Farsi as the test languages. The results show F1 scores of 63.9% and 56.2% for the French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. This work can be considered a simple yet powerful baseline for further investigation into cross-language tasks.
abstractGer |
This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in text. The goal of the task is to train classifiers for low-resource languages by means of annotated data from high-resource languages. Related methods usually employ parallel data or machine translation (MT) to project annotated data from a source to a target language. However, the availability and quality of parallel data and MT are major challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to utilize a tree-based representation of the data, which is highly informative for classifying semantic relations and is also shared among different languages. All training and test data are encoded in this representation. We propose to use Universal Dependency (UD) parsing, a language-agnostic formalism for representing syntactic structure. Equipping UD parse trees with multilingual word embeddings yields an ideal representation for the cross-language relation extraction task. We propose two deep networks that use this representation: the first utilizes the Shortest Dependency Path of UD trees, while the second employs UD-based positional embeddings. Experiments are performed using the SemEval 2010 Task 8 training data, with French and Farsi as the test languages. The results show F1 scores of 63.9% and 56.2% for the French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. This work can be considered a simple yet powerful baseline for further investigation into cross-language tasks.
abstract_unstemmed |
This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in text. The goal of the task is to train classifiers for low-resource languages by means of annotated data from high-resource languages. Related methods usually employ parallel data or machine translation (MT) to project annotated data from a source to a target language. However, the availability and quality of parallel data and MT are major challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to utilize a tree-based representation of the data, which is highly informative for classifying semantic relations and is also shared among different languages. All training and test data are encoded in this representation. We propose to use Universal Dependency (UD) parsing, a language-agnostic formalism for representing syntactic structure. Equipping UD parse trees with multilingual word embeddings yields an ideal representation for the cross-language relation extraction task. We propose two deep networks that use this representation: the first utilizes the Shortest Dependency Path of UD trees, while the second employs UD-based positional embeddings. Experiments are performed using the SemEval 2010 Task 8 training data, with French and Farsi as the test languages. The results show F1 scores of 63.9% and 56.2% for the French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. This work can be considered a simple yet powerful baseline for further investigation into cross-language tasks.
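The Shortest Dependency Path feature mentioned in the abstract can be illustrated with a small sketch. The sentence, the hand-written head indices, and the entity positions below are illustrative assumptions, not taken from the paper or from a real parser; in practice the heads would come from a UD parser.

```python
from collections import deque

# Hand-written toy UD parse of "The burst was caused by pressure"
# (illustrative head assignments, not the output of a real parser).
tokens = ["The", "burst", "was", "caused", "by", "pressure"]
heads = [2, 4, 4, 0, 6, 4]  # 1-indexed head per token; 0 marks the root

# Build an undirected adjacency list over the dependency tree.
adj = {i: set() for i in range(1, len(tokens) + 1)}
for i, h in enumerate(heads, start=1):
    if h:
        adj[i].add(h)
        adj[h].add(i)

def shortest_dependency_path(start, goal):
    """BFS over the tree; returns the token indices on the path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return []

# SDP between the two entity mentions "burst" (2) and "pressure" (6):
sdp = [tokens[i - 1] for i in shortest_dependency_path(2, 6)]
# -> ['burst', 'caused', 'pressure']
```

Because UD uses the same relation inventory across languages, a path like this one is comparable between a high-resource training language and a low-resource test language, which is the intuition behind the transfer described in the abstract.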
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U |
title_short |
Cross-lingual transfer learning for relation extraction using Universal Dependencies |
url |
https://doi.org/10.1016/j.csl.2021.101265 |
remote_bool |
true |
author2 |
Faili, Heshaam |
author2Str |
Faili, Heshaam |
ppnlink |
ELV006450652 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth |
doi_str |
10.1016/j.csl.2021.101265 |
up_date |
2024-07-06T17:29:55.875Z |
_version_ |
1803851656988721152 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV055418643</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626041636.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220105s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.csl.2021.101265</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001511.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV055418643</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0885-2308(21)00071-1</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">150</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Taghizadeh, Nasrin</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Cross-lingual transfer learning for relation extraction using Universal Dependencies</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht 
spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in the text. The goal of the task is to train classifiers for low-resource languages by means of the annotated data from high-resource languages. Related methods usually employ parallel data or Machine Translator (MT) to project annotated data from a source to a target language. However, the availability and the quality of parallel data and MT are big challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to utilize a tree-based representation of data, which is highly informative for classifying semantic relations, and also shared among different languages. All the training and test data are shown using this representation. We propose to use the Universal Dependency (UD) parsing, which is a language-agnostic formalism for representation of syntactic structures. Equipping UD parse trees with multi-lingual word embeddings makes an ideal representation for the cross-language relation extraction task. We propose two deep networks to use this representation. The first one utilizes the Shortest Dependency Path of UD trees, while the second employs the UD-based positional embeddings. Experiments are performed using SemEval 2010-task 8 training data, whereas French and Farsi are the test languages. The results show 63.9% and 56.2% F 1 scores, for French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. 
This work can be considered a simple yet powerful baseline for further investigation into the cross-language tasks.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">This paper focuses on the task of cross-language relation extraction, which aims to identify the semantic relations holding between entities in the text. The goal of the task is to train classifiers for low-resource languages by means of the annotated data from high-resource languages. Related methods usually employ parallel data or Machine Translator (MT) to project annotated data from a source to a target language. However, the availability and the quality of parallel data and MT are big challenges for low-resource languages. In this paper, a novel transfer learning method is presented for this task. The key idea is to utilize a tree-based representation of data, which is highly informative for classifying semantic relations, and also shared among different languages. All the training and test data are shown using this representation. We propose to use the Universal Dependency (UD) parsing, which is a language-agnostic formalism for representation of syntactic structures. Equipping UD parse trees with multi-lingual word embeddings makes an ideal representation for the cross-language relation extraction task. We propose two deep networks to use this representation. The first one utilizes the Shortest Dependency Path of UD trees, while the second employs the UD-based positional embeddings. Experiments are performed using SemEval 2010-task 8 training data, whereas French and Farsi are the test languages. The results show 63.9% and 56.2% F 1 scores, for French and Farsi test data, respectively, which are 14.4% and 17.9% higher than the baseline. 
This work can be considered a simple yet powerful baseline for further investigation into the cross-language tasks.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Relation extraction</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Dependency context</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Universal Dependency Parsing</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Tree-based models</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Faili, Heshaam</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Academic Press</subfield><subfield code="a">Karwowski, Maciej ELSEVIER</subfield><subfield code="t">Intensive schooling and cognitive ability: A case of Polish educational reform</subfield><subfield code="d">2021</subfield><subfield code="g">London</subfield><subfield code="w">(DE-627)ELV006450652</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:71</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:0</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.csl.2021.101265</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield 
code="d">71</subfield><subfield code="j">2022</subfield><subfield code="h">0</subfield></datafield></record></collection>
score |
7.399722 |