Julian Risch and Ralf Krestel
Our article "Measuring and Facilitating Data Repeatability in Web Science" has been accepted for publication in the Datenbank-Spektrum Journal. The article is part of our ongoing research on comment classification and will appear in a special issue on data and repeatability. A pre-print of the journal article can be found here: Link. The final version is published with Springer: Link.
Accessible and reusable datasets are a necessity for repeatable research. This requirement poses a problem particularly for web science, since scraped data comes in various formats and can change due to the dynamic character of the web. Further, use of web data is typically restricted by copyright protection or privacy regulations, which hinder the publication of datasets.
To alleviate these problems and reach what we define as “partial data repeatability”, we present a process that consists of multiple components. Researchers need to distribute only a scraper and not the data itself to comply with legal limitations. If a dataset is re-scraped for repeatability after some time, the integrity of different versions can be checked based on fingerprints. Moreover, fingerprints are sufficient to identify which parts of the data have changed and by how much.
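As a minimal sketch of the fingerprinting idea (not the article's actual implementation), per-item fingerprints can be plain cryptographic hashes of the scraped content; comparing two fingerprint snapshots of the same dataset then reveals which items were deleted, added, or modified, without ever sharing the data itself. All names below are illustrative.

```python
import hashlib

def fingerprint(text: str) -> str:
    # Hash a single scraped item (e.g., one comment) to a fixed-size fingerprint.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def compare(old: dict, new: dict):
    # Compare two {item_id: fingerprint} snapshots of the same dataset.
    # Returns the ids of items deleted, added, or modified between scrapes.
    deleted = set(old) - set(new)
    added = set(new) - set(old)
    modified = {i for i in set(old) & set(new) if old[i] != new[i]}
    return deleted, added, modified

# Hypothetical example: a tiny dataset scraped at two points in time.
v1 = {"c1": fingerprint("First comment"),
      "c2": fingerprint("Second comment")}
v2 = {"c2": fingerprint("Second comment (edited)"),
      "c3": fingerprint("New comment")}

deleted, added, modified = compare(v1, v2)
print(deleted, added, modified)  # c1 removed, c3 added, c2 edited
```

Only the fingerprint dictionaries need to be published alongside the scraper; a later re-scrape can then be checked for integrity against them.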
We evaluate an implementation of this process with a dataset of 250 million online comments collected from five different news discussion platforms. We re-scraped the dataset after one year and show that less than ten percent of the data had actually changed. These experiments demonstrate that providing a scraper and fingerprints enables recreating a dataset and supports the repeatability of web science experiments.