Entity resolution (ER) is an essential step in data cleaning pipelines: it aims to detect and consolidate multiple records that refer to the same real-world entity. Although this topic has been studied in the literature for more than 50 years, many challenges remain. One of them is the lack of consolidated benchmarks for evaluating and comparing ER approaches. This lack is exacerbated by the fact that state-of-the-art ER approaches are based on supervised learning methods, which are particularly data-hungry. More specifically: (i) There is no centralized repository of ER benchmarks. Rather, they are fragmented across multiple websites (e.g., [1, 2, 3, 4, 5], to list a few). (ii) Different versions of the same benchmark exist, so comparing multiple approaches requires selecting the correct dataset versions and understanding how these versions were created by transforming the original data.
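To make the task concrete, the following is a minimal, hypothetical sketch of pairwise record matching, the core operation that ER benchmarks evaluate. The records, attribute names, and threshold are illustrative assumptions; the similarity function uses Python's standard-library difflib, whereas real ER systems typically learn their scoring functions from labeled benchmark pairs.

```python
from difflib import SequenceMatcher

# Two hypothetical records from different data sources that
# plausibly refer to the same real-world entity.
record_a = {"title": "iPhone 13 Pro 128GB", "brand": "Apple"}
record_b = {"title": "Apple iPhone 13 Pro (128 GB)", "brand": "apple"}

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(r1: dict, r2: dict, threshold: float = 0.7) -> bool:
    # Average attribute-wise similarity; supervised ER approaches
    # replace this hand-tuned rule with a learned classifier,
    # which is why labeled benchmark data is so important.
    score = sum(similarity(r1[k], r2[k]) for k in r1) / len(r1)
    return score >= threshold

print(match(record_a, record_b))  # True: the records are consolidated
```

A benchmark for this setting would supply many such record pairs together with ground-truth match/non-match labels, so that the accuracy of different matching strategies can be compared on equal footing.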