Dear all,
We are happy to announce GERBIL – a General Entity Annotator Benchmarking Framework; a demo is available online! With GERBIL, we aim to establish a highly available, easily quotable, and reliable focal point for Named Entity Recognition and Named Entity Disambiguation (Entity Linking) evaluations:
- GERBIL provides persistent URLs for experimental settings. By these means, GERBIL also addresses the problem of archiving experimental results.
- The results of GERBIL are published in a human-readable as well as a machine-readable format. By these means, we also tackle the problem of reproducibility.
- GERBIL provides 11 different datasets and 9 different entity annotators. Please talk to us if you want to add yours.
To ensure that the GERBIL framework is useful to both end users and tool developers, its architecture and interface were designed with the following principles in mind:
- Easy integration of annotators: We provide a web-based interface that allows annotators to be evaluated via their NIF-based REST interface. We also provide a small NIF library that makes implementing this interface easy (see the sketch after this list).
- Easy integration of datasets: We also provide means to gather datasets for evaluation directly from data services such as DataHub.
- Extensibility: GERBIL is provided as an open-source platform that members of the community can extend to both new tasks and different purposes.
- Diagnostics: The interface of the tool was designed to help developers easily detect the aspects in which their tools need improvement.
- Portability of results: We generate human- and machine-readable output to ensure maximum usefulness and portability of the results produced by our framework.
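For those curious what the NIF-based exchange looks like, here is a minimal sketch in Python of how a client such as GERBIL might query an annotator. The endpoint URL, the example text, and the media type are illustrative assumptions, not GERBIL's actual configuration; please check your annotator's documentation for the real values.

    import requests

    # Hypothetical annotator endpoint; replace with your service's actual URL.
    ANNOTATOR_URL = "http://example.org/my-annotator/nif"

    text = "Barack Obama visited Paris."

    # A minimal NIF context document in Turtle serialization.
    nif_request = f"""\
    @prefix nif: <http://persistence.uni-leipzig.de/nlp2rdf/ontologies/nif-core#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    <http://example.org/doc#char=0,{len(text)}>
        a nif:Context , nif:String , nif:RFC5147String ;
        nif:isString "{text}" ;
        nif:beginIndex "0"^^xsd:nonNegativeInteger ;
        nif:endIndex "{len(text)}"^^xsd:nonNegativeInteger .
    """

    # Send the document; the annotator is expected to return the same NIF
    # document enriched with entity annotations (e.g., itsrdf:taIdentRef
    # triples pointing to knowledge-base resources such as DBpedia).
    response = requests.post(
        ANNOTATOR_URL,
        data=nif_request.encode("utf-8"),
        headers={"Content-Type": "application/x-turtle"},
    )
    response.raise_for_status()
    print(response.text)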
We are looking forward to your feedback!
Best regards,
Ricardo Usbeck for The GERBIL Team