RECURRENT NEURAL NETWORKS AND FINITE AUTOMATA
This article studies finite size networks that consist of interconnections of synchronously evolving processors. Each processor updates its state by applying an activation function to a linear combination of the previous states of all units. We prove that any function for which the left and right limits exist and are different can be applied to the neurons to yield a network which is at least as strong computationally as a finite automaton. We conclude that if this is the power required, one may choose any of the aforementioned neurons, according to the hardware available or the learning software preferred for the particular application.
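The update rule described in the abstract can be illustrated with a minimal sketch (not taken from the article itself): a single neuron whose activation has differing left and right limits at its threshold, wired to simulate the two-state automaton "has a 1 been seen yet?". The weights 1.0, 1.0 and bias -0.5 are illustrative choices, not values from the paper.

```python
def heaviside(z: float) -> float:
    # Left limit at 0 is 0, right limit is 1 -- the "limits exist and
    # are different" property the abstract requires of the activation.
    return 1.0 if z > 0 else 0.0

def run(bits) -> int:
    """Simulate the DFA 'at least one 1 has appeared' with one neuron."""
    s = 0.0  # state: 0 = no 1 seen yet, 1 = a 1 has been seen
    for x in bits:
        # Activation applied to a linear combination of the previous
        # state and the current input, as in the abstract's update rule.
        s = heaviside(1.0 * s + 1.0 * x - 0.5)
    return int(s)
```

Once the state neuron saturates at 1 it stays there, mirroring the absorbing accept state of the automaton.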
Detailed description
Author: Siegelmann, Hava T. [author]
Format: E-article
Published: Oxford, UK: Blackwell Publishing Ltd; 1996
Subjects: recurrent neural networks
Extent: Online resource
Reproduction: 2007; Blackwell Publishing Journal Backfiles 1879-2005
Parent work: In: Computational intelligence - Oxford [u.a.]: Wiley-Blackwell, 1985, 12(1996), 4, page 0
Parent work: volume:12; year:1996; number:4; pages:0
Links:
DOI: 10.1111/j.1467-8640.1996.tb00277.x
Catalog ID: NLEJ238515044
LEADER  01000caa a22002652 4500
001     NLEJ238515044
003     DE-627
005     20210707063741.0
007     cr uuu---uuuuu
008     120417s1996 xx |||||o 00| ||und c
024 7   |a 10.1111/j.1467-8640.1996.tb00277.x |2 doi
035     |a (DE-627)NLEJ238515044
040     |a DE-627 |b ger |c DE-627 |e rakwb
100 1   |a Siegelmann, Hava T. |e verfasserin |4 aut
245 1 0 |a RECURRENT NEURAL NETWORKS AND FINITE AUTOMATA
264   1 |a Oxford, UK |b Blackwell Publishing Ltd |c 1996
300     |a Online-Ressource
336     |a nicht spezifiziert |b zzz |2 rdacontent
337     |a nicht spezifiziert |b z |2 rdamedia
338     |a nicht spezifiziert |b zu |2 rdacarrier
520     |a This article studies finite size networks that consist of interconnections of synchronously evolving processors. Each processor updates its state by applying an activation function to a linear combination of the previous states of all units. We prove that any function for which the left and right limits exist and are different can be applied to the neurons to yield a network which is at least as strong computationally as a finite automaton. We conclude that if this is the power required, one may choose any of the aforementioned neurons, according to the hardware available or the learning software preferred for the particular application.
533     |d 2007 |f Blackwell Publishing Journal Backfiles 1879-2005 |7 |2007||||||||||
650   4 |a recurrent neural networks
773 0 8 |i In |t Computational intelligence |d Oxford [u.a.] : Wiley-Blackwell, 1985 |g 12(1996), 4, Seite 0 |h Online-Ressource |w (DE-627)NLEJ243926685 |w (DE-600)2016539-0 |x 1467-8640 |7 nnns
773 1 8 |g volume:12 |g year:1996 |g number:4 |g pages:0
856 4 0 |u http://dx.doi.org/10.1111/j.1467-8640.1996.tb00277.x |q text/html |x Verlag |z Deutschlandweit zugänglich |3 Volltext
912     |a GBV_USEFLAG_U
912     |a ZDB-1-DJB
912     |a GBV_NL_ARTICLE
951     |a AR
952     |d 12 |j 1996 |e 4 |h 0