A framework for assessing the peer review duration of journals: case study in computer science
Abstract: In various fields, scientific article publication is a measure of productivity, and it is often used as a critical factor in evaluating researchers. A lot of time is therefore dedicated to writing articles that are then submitted for publication in journals. Nevertheless, the publication process in general, and the review process in particular, tend to be rather slow; this is the case, for instance, for computer science (CS) journals. Moreover, the process typically lacks transparency: information about the duration of the review process is at best provided in an aggregated manner, if it is made available at all. In this paper, we develop a framework as a step towards providing more reliable data on review duration. Based on this framework, we implement a tool, Journal Response Time (JRT), that automatically extracts review process data and helps researchers find the average response times of journals, which can be used to study the duration of the peer review process of CS journals. The information is extracted as metadata from the published articles, when available. This study reveals that the response times publicly provided by publishers differ from the actual values obtained by JRT (e.g., for ten selected journals, the average duration reported by publishers deviates by more than 500% from the average value calculated from the data inside the articles). We suspect this is because, when calculating the aggregated values, publishers also count the review time of rejected articles, including quick desk rejections that do not require reviewers.
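The JRT tool described above derives response times from the submission and acceptance dates that publishers print in article metadata. As a minimal sketch of that calculation (not the authors' implementation; the sample dates are hypothetical), the average response time of a journal could be computed as follows:

from datetime import date

# Hypothetical per-article metadata as (received, accepted) date pairs,
# as printed in the publication history of many published articles.
articles = [
    (date(2019, 3, 1), date(2019, 9, 15)),
    (date(2019, 6, 10), date(2020, 1, 20)),
    (date(2019, 11, 5), date(2020, 4, 2)),
]

# Response time per article = days between submission and acceptance.
durations = [(accepted - received).days for received, accepted in articles]

# Average response time of the journal over the sampled articles.
average_days = sum(durations) / len(durations)
print(f"average response time: {average_days:.1f} days")

Averaged over a sufficiently large sample of a journal's published articles, this yields the kind of per-journal figure the paper compares against publisher-reported values.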
Detailed description

Author: Bilalli, Besim [author]
Format: Article
Language: English
Published: 2020
Subjects: Peer review process; Review process duration; Review process quality
Note: © Akadémiai Kiadó, Budapest, Hungary 2020
Contained in: Scientometrics - Springer International Publishing, 1978, 126(2020), issue 1, 05 Nov., pages 545-563
Issue details: volume:126 ; year:2020 ; number:1 ; day:05 ; month:11 ; pages:545-563
DOI / URN: 10.1007/s11192-020-03742-9
Full text (subscription required): https://doi.org/10.1007/s11192-020-03742-9
Catalog ID: OLC212295986X
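The DOI above can also be resolved programmatically. As a sketch, assuming network access, standard DOI content negotiation (requesting CSL JSON from https://doi.org/) returns bibliographic metadata for this record:

import json
import urllib.request

# DOI from the record above.
doi = "10.1007/s11192-020-03742-9"

# DOI content negotiation: asking https://doi.org/ for CSL JSON
# returns bibliographic metadata from the registration agency.
req = urllib.request.Request(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
with urllib.request.urlopen(req) as resp:
    meta = json.load(resp)

print(meta["title"])            # article title
print(meta["container-title"])  # journal name, e.g. Scientometrics
print(meta.get("page"))         # page range, e.g. 545-563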
LEADER 01000naa a22002652 4500
001 OLC212295986X
003 DE-627
005 20230505070513.0
007 tu
008 230505s2020 xx ||||| 00| ||eng c
024 7_ |a 10.1007/s11192-020-03742-9 |2 doi
035 __ |a (DE-627)OLC212295986X
035 __ |a (DE-He213)s11192-020-03742-9-p
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
082 04 |a 050 |a 370 |q VZ
084 __ |a 11 |2 ssgn
100 1_ |a Bilalli, Besim |e verfasserin |0 (orcid)0000-0002-0575-2389 |4 aut
245 10 |a A framework for assessing the peer review duration of journals: case study in computer science
264 _1 |c 2020
336 __ |a Text |b txt |2 rdacontent
337 __ |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 __ |a Band |b nc |2 rdacarrier
500 __ |a © Akadémiai Kiadó, Budapest, Hungary 2020
520 __ |a In various fields, scientific article publication is a measure of productivity, and it is often used as a critical factor in evaluating researchers. A lot of time is therefore dedicated to writing articles that are then submitted for publication in journals. Nevertheless, the publication process in general, and the review process in particular, tend to be rather slow; this is the case, for instance, for computer science (CS) journals. Moreover, the process typically lacks transparency: information about the duration of the review process is at best provided in an aggregated manner, if it is made available at all. In this paper, we develop a framework as a step towards providing more reliable data on review duration. Based on this framework, we implement a tool, Journal Response Time (JRT), that automatically extracts review process data and helps researchers find the average response times of journals, which can be used to study the duration of the peer review process of CS journals. The information is extracted as metadata from the published articles, when available. This study reveals that the response times publicly provided by publishers differ from the actual values obtained by JRT (e.g., for ten selected journals, the average duration reported by publishers deviates by more than 500% from the average value calculated from the data inside the articles). We suspect this is because, when calculating the aggregated values, publishers also count the review time of rejected articles, including quick desk rejections that do not require reviewers.
650 _4 |a Peer review process
650 _4 |a Review process duration
650 _4 |a Review process quality
700 1_ |a Munir, Rana Faisal |4 aut
700 1_ |a Abelló, Alberto |4 aut
773 08 |i Enthalten in |t Scientometrics |d Springer International Publishing, 1978 |g 126(2020), 1 vom: 05. Nov., Seite 545-563 |w (DE-627)13005352X |w (DE-600)435652-4 |w (DE-576)015591697 |x 0138-9130 |7 nnns
773 18 |g volume:126 |g year:2020 |g number:1 |g day:05 |g month:11 |g pages:545-563
856 41 |u https://doi.org/10.1007/s11192-020-03742-9 |z lizenzpflichtig |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_OLC
912 __ |a SSG-OLC-BUB
912 __ |a SSG-OLC-HSW
912 __ |a SSG-OPC-BBI
912 __ |a GBV_ILN_4012
951 __ |a AR
952 __ |d 126 |j 2020 |e 1 |b 05 |c 11 |h 545-563
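The same record also circulates as MARCXML (namespace http://www.loc.gov/MARC21/slim). As a minimal sketch, assuming the XML has been saved locally as record.xml (a hypothetical file name), the Python standard library suffices to read individual fields:

import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

tree = ET.parse("record.xml")  # hypothetical local copy of the MARCXML record
record = tree.find(".//marc:record", NS)

def subfield(tag, code):
    """Return the first matching subfield value, e.g. subfield('024', 'a')."""
    for field in record.findall(f"marc:datafield[@tag='{tag}']", NS):
        sf = field.find(f"marc:subfield[@code='{code}']", NS)
        if sf is not None:
            return sf.text
    return None

print(subfield("245", "a"))  # title
print(subfield("024", "a"))  # DOI
print(subfield("773", "x"))  # ISSN of the parent journal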