Crowd-based Multi-Predicate Screening of Papers in Literature Reviews

Bibliographic details
Parent link: World Wide Web Conference (WWW 2018). — 2018. — [P. 55-64]
Main author: Krivosheev E. Evgeny
Institution as author: National Research Tomsk Polytechnic University, School of Engineering Entrepreneurship
Other authors: Casati F. Fabio, Benatallah B. Boualem
Summary: Title screen
Systematic literature reviews (SLRs) are one of the most common and useful forms of scientific research and publication. Tens of thousands of SLRs are published each year, and this rate is growing across all fields of science. Performing an accurate, complete and unbiased SLR is, however, a difficult and expensive endeavor. This is true in general for all phases of a literature review, and in particular for the paper screening phase, where authors filter a set of potentially in-scope papers based on a number of exclusion criteria. To address the problem, in recent years the research community has begun to explore the use of the crowd to allow for a faster, accurate, cheaper and unbiased screening of papers. Initial results show that crowdsourcing can be effective, even for relatively complex reviews. In this paper we derive and analyze a set of strategies for crowd-based screening, and show that an adaptive strategy, which continuously re-assesses the statistical properties of the problem to minimize the number of votes needed to take decisions for each paper, significantly outperforms a number of non-adaptive approaches in terms of cost and accuracy. We validate both applicability and results of the approach through a set of crowdsourcing experiments, and discuss properties of the problem and algorithms that we believe to be generally of interest for classification problems where items are classified via a series of successive tests (as often happens in medicine).
Language: English
Published: 2018
Subjects:
Online access: https://doi.org/10.1145/3178876.3186036
Format: Electronic Book Chapter
KOHA link: https://koha.lib.tpu.ru/cgi-bin/koha/opac-detail.pl?biblionumber=659541

MARC

LEADER 00000naa0a2200000 4500
001 659541
005 20250827110702.0
035 |a (RuTPU)RU\TPU\network\28146 
090 |a 659541 
100 |a 20190226d2018 k||y0engy50 ba 
101 0 |a eng 
135 |a drcn ---uucaa 
181 0 |a i  
182 0 |a b 
200 0 |a Crowd-based Multi-Predicate Screening of Papers in Literature Reviews  |f E. Krivosheev, F. Casati, B. Benatallah 
203 |a Text  |c electronic 
300 |a Title screen 
330 |a Systematic literature reviews (SLRs) are one of the most common and useful forms of scientific research and publication. Tens of thousands of SLRs are published each year, and this rate is growing across all fields of science. Performing an accurate, complete and unbiased SLR is, however, a difficult and expensive endeavor. This is true in general for all phases of a literature review, and in particular for the paper screening phase, where authors filter a set of potentially in-scope papers based on a number of exclusion criteria. To address the problem, in recent years the research community has begun to explore the use of the crowd to allow for a faster, accurate, cheaper and unbiased screening of papers. Initial results show that crowdsourcing can be effective, even for relatively complex reviews. In this paper we derive and analyze a set of strategies for crowd-based screening, and show that an adaptive strategy, which continuously re-assesses the statistical properties of the problem to minimize the number of votes needed to take decisions for each paper, significantly outperforms a number of non-adaptive approaches in terms of cost and accuracy. We validate both applicability and results of the approach through a set of crowdsourcing experiments, and discuss properties of the problem and algorithms that we believe to be generally of interest for classification problems where items are classified via a series of successive tests (as often happens in medicine). 
463 |t World Wide Web Conference (WWW 2018)  |o proceedings, Lyon, France, April 23-27, 2018  |v [P. 55-64]  |d 2018 
610 1 |a works of TPU scientists 
610 1 |a electronic resource 
610 1 |a human computation 
610 1 |a classification 
610 1 |a literature reviews 
610 1 |a computations 
610 1 |a classifications 
610 1 |a literature reviews 
700 1 |a Krivosheev  |b E.  |g Evgeny 
701 1 |a Casati  |b F.  |c Italian computer scientist and Professor at the University of Trento (Italy)  |c Professor of Tomsk Polytechnic University, Candidate of Technical Sciences  |f 1971-  |g Fabio  |3 (RuTPU)RU\TPU\pers\39820 
701 1 |a Benatallah  |b B.  |g Boualem 
712 0 2 |a National Research Tomsk Polytechnic University  |b School of Engineering Entrepreneurship  |c (2017- )  |3 (RuTPU)RU\TPU\col\23544 
801 2 |a RU  |b 63413507  |c 20190226  |g RCR 
856 4 |u https://doi.org/10.1145/3178876.3186036 
942 |c CF