University of Leipzig AKSW Blog

Archive for the category 'Software Releases'

DL-Learner 1.0 (Supervised Structured Machine Learning Framework) Released

February 13, 2015 - 10:38 am by Jens Lehmann - No comments »

Dear all,

we are happy to announce DL-Learner 1.0.

DL-Learner is a framework containing algorithms for supervised machine learning in RDF and OWL. DL-Learner can use various RDF and OWL serialization formats as well as SPARQL endpoints as input, can connect to most popular OWL reasoners and is easily and flexibly configurable. It extends concepts of Inductive Logic Programming and Relational Learning to the Semantic Web in order to allow powerful data analysis.

GitHub page:

DL-Learner is used for data analysis in other tools such as ORE and RDFUnit. Technically, it uses refinement-operator-based, pattern-based and evolutionary techniques for learning on structured data. It also offers a plugin for Protégé, which can suggest axioms to add to an ontology. DL-Learner is part of the Linked Data Stack – a repository for Linked Data management tools.
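As a rough illustration of how a learning problem is set up, a minimal DL-Learner configuration file might look like the following. This is a hand-written sketch: the file name, prefix and example individuals are placeholders, and the component keys follow the conf-file format used in the release's example files.

```
// knowledge source: a local OWL file (placeholder name)
prefixes = [ ("ex","http://example.com/family#") ]
ks.type = "OWL File"
ks.fileName = "family.owl"

// reasoner over the knowledge source
reasoner.type = "closed world reasoner"
reasoner.sources = { ks }

// learning problem: positive and negative example individuals
lp.type = "posNegStandard"
lp.positiveExamples = { "ex:stefan", "ex:markus" }
lp.negativeExamples = { "ex:heinz", "ex:anna" }

// learning algorithm (CELOE is one of the refinement-operator-based algorithms)
alg.type = "celoe"
alg.maxExecutionTimeInSeconds = 10
```

Running such a file with the DL-Learner CLI produces a ranked list of candidate class expressions covering the positive examples.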

We want to thank everyone who helped to create this release, in particular (alphabetically) An Tran, Chris Shellenbarger, Christoph Haase, Daniel Fleischhacker, Didier Cherix, Johanna Völker, Konrad Höffner, Robert Höhndorf, Sebastian Hellmann and Simon Bin. We also acknowledge support by the recently started SAKE project, in which DL-Learner will be applied to event analysis in manufacturing use cases, as well as the GeoKnow and Big Data Europe projects where it is part of the respective platforms.

Kind regards,

Lorenz Bühmann, Jens Lehmann and Patrick Westphal

RDFaCE 0.7 Released

May 11, 2014 - 4:34 pm by AliKhalili - No comments »

To stay up to date with recent changes in TinyMCE and WordPress, we have released a new version of our RDFaCE editor for semantic content authoring.

The new version of RDFaCE comes with the following main changes:

  • Compatibility with WordPress 3.9 and TinyMCE 4.0
  • Support of all existing DBpedia classes for automatic content annotation
  • Support for inline content editing using the HTML5 contenteditable attribute
  • Configuration for automatic annotation (confidence, markup format, entity types)
  • Some bug fixes
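The annotations produced by such semantic content authoring are standard RDFa embedded in the HTML. A minimal hand-written sketch of an annotation of this kind (the vocabulary and DBpedia URI are illustrative choices, not actual RDFaCE output):

```
<p vocab="http://dbpedia.org/ontology/">
  <span about="http://dbpedia.org/resource/Leipzig" typeof="City">Leipzig</span>
  is a city in Saxony.
</p>
```

An RDFa-aware processor extracts from this fragment the triple stating that the DBpedia resource for Leipzig has the type City.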

For more information about RDFaCE, please read this paper or watch our video about the WYSIWYM (What You See Is What You Mean) concept.

To try a demo of the new version, visit

To download RDFaCE plugin for WordPress, visit

AKSW Colloquium with NIF Release Preparation on Monday, February 10

February 7, 2014 - 2:23 pm by KonradHoeffner - No comments »

NIF Release Preparation

On Monday, February 10, at 1.30 pm in room P702 (Paulinum of the University of Leipzig main building at the Augustusplatz), Sebastian Hellmann will present the Natural Language Processing (NLP) Interchange Format (NIF) which is based on a Linked Data enabled URI scheme for identifying elements in (hyper-)texts and an ontology for describing common NLP terms and concepts. During the meeting we will jointly look at the existing tools and infrastructure, collect issues and discuss potential fixes. Bringing a laptop is recommended.

About the AKSW Colloquium

This event is part of a series of events about Semantic Web technology. Please see our website for information about previous and future events. As always, Bachelor and Master students are able to get points for attendance and there is complimentary coffee and cake after the session.


We are currently observing a plethora of Natural Language Processing tools and services being made available. Each of the tools and services has its particular strengths and weaknesses, but exploiting the strengths and synergistically combining different tools is currently an extremely cumbersome and time consuming task. Also, once a particular set of tools is integrated, this integration is not reusable by others. We argue that simplifying the interoperability of different NLP tools performing similar but also complementary tasks will facilitate the comparability of results and the creation of sophisticated NLP applications. In this session, we present the NLP Interchange Format (NIF). NIF is based on a Linked Data enabled URI scheme for identifying elements in (hyper-)texts and an ontology for describing common NLP terms and concepts. In contrast to more centralized solutions such as UIMA and GATE, NIF enables the creation of heterogeneous, distributed and loosely coupled NLP applications, which use the Web as an integration platform. We present several use cases of the second version of the NIF specification (NIF 2.0) and the result of a developer study.
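To make the URI scheme concrete, here is a hand-written sketch of a NIF annotation in Turtle. The document URI is a placeholder; the substring "Leipzig" is identified by its character offsets within the context string and linked to a DBpedia resource:

```
@prefix nif:    <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#> .
@prefix itsrdf: <http://www.w3.org/2005/11/its/rdf#> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .

# the whole text, identified by character offsets 0-35
<http://example.org/doc#char=0,35>
    a nif:String , nif:Context ;
    nif:isString   "The AKSW group is based in Leipzig." ;
    nif:beginIndex "0"^^xsd:nonNegativeInteger ;
    nif:endIndex   "35"^^xsd:nonNegativeInteger .

# the substring "Leipzig", anchored in its context and linked to DBpedia
<http://example.org/doc#char=27,34>
    a nif:String ;
    nif:referenceContext <http://example.org/doc#char=0,35> ;
    nif:anchorOf   "Leipzig" ;
    nif:beginIndex "27"^^xsd:nonNegativeInteger ;
    nif:endIndex   "34"^^xsd:nonNegativeInteger ;
    itsrdf:taIdentRef <http://dbpedia.org/resource/Leipzig> .
```

Because any NLP tool can emit and consume such offset-based URIs, the outputs of different tools over the same text can be merged as plain RDF.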


Preview release of conTEXT for Linked-Data based text analytics

January 17, 2014 - 3:04 pm by AliKhalili - No comments »

We are happy to announce the preview release of conTEXT — a platform for lightweight text analytics using Linked Data.
conTEXT enables social Web users to semantically analyze text corpora (such as blogs, RSS/Atom feeds, Facebook, G+, Twitter or decks) and provides novel ways for browsing and visualizing the results.

conTEXT workflow

The process of text analytics in conTEXT starts by collecting information from the web. conTEXT utilizes standard information access methods and protocols such as RSS/ATOM feeds, SPARQL endpoints and REST APIs as well as customized crawlers for WordPress and Blogger to build a corpus of information relevant for a certain user.

The assembled text corpus is then processed by Natural Language Processing (NLP) services (currently FOX and DBpedia-Spotlight), which link unstructured information sources to the Linked Open Data cloud through DBpedia. The processed corpus is further enriched by de-referencing the DBpedia URIs as well as by matching with pre-defined natural-language patterns for DBpedia predicates (BOA patterns).

The processed data can also be joined with other existing corpora in a text analytics mashup. The creation of analytics mashups requires dealing with the heterogeneity of different corpora as well as the heterogeneity of the different NLP services utilized for annotation. conTEXT employs NIF (NLP Interchange Format) to deal with this heterogeneity.

The processed, enriched and possibly mixed results are presented to users using different views for exploration and visualization of the data. Additionally, conTEXT provides an annotation refinement user interface based on the RDFa Content Editor (RDFaCE) to enable users to revise the annotated results. User-refined annotations are sent back to the NLP services as feedback for the purpose of learning in the system.
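The entity-linking step of such a pipeline can be sketched as a call to a DBpedia Spotlight-style annotation endpoint. This is a minimal illustration, not conTEXT code: the endpoint URL and the `@URI`/`@surfaceForm` response fields are assumptions based on the public Spotlight REST API.

```python
import json
import urllib.parse
import urllib.request

# assumed public Spotlight endpoint (not part of conTEXT itself)
SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

def build_request(text: str, confidence: float = 0.5) -> urllib.request.Request:
    """Build an annotate request asking for a JSON response."""
    params = urllib.parse.urlencode({"text": text, "confidence": confidence})
    return urllib.request.Request(
        SPOTLIGHT_URL + "?" + params,
        headers={"Accept": "application/json"},
    )

def extract_links(response_json: str) -> list[tuple[str, str]]:
    """Return (surface form, DBpedia URI) pairs from a Spotlight-style JSON reply."""
    doc = json.loads(response_json)
    return [(r["@surfaceForm"], r["@URI"]) for r in doc.get("Resources", [])]

# a response fragment in the (assumed) Spotlight JSON shape
sample = ('{"Resources": [{"@URI": "http://dbpedia.org/resource/Leipzig", '
          '"@surfaceForm": "Leipzig", "@offset": "27"}]}')
print(extract_links(sample))  # → [('Leipzig', 'http://dbpedia.org/resource/Leipzig')]
```

The extracted DBpedia URIs are what the subsequent enrichment step de-references; representing them in NIF is what lets results from different annotation services be merged.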

For more information on conTEXT visit:

SlideWiki is now Open Source

September 24, 2013 - 11:45 pm by AliKhalili - One comment »

We are pleased to announce that we have just released the source code under the permissive Apache open-source license. It is now available for download from the AKSW Github repository at:

The SlideWiki database dumps are also available for download. SlideWiki is a platform for OpenCourseWare authoring and enables communities of educators to author, share and re-use multilingual educational content in a truly collaborative way. By completely open-sourcing SlideWiki and giving the community access to all the content, we aim at:

  • Providing open access to crowdsourced e-learning material to be authored, shared and reused.
  • Collaborating with other open-source projects to improve the quality of the SlideWiki implementation.
  • Inviting developers to openly contribute to SlideWiki and to write customized plugins and themes for SlideWiki.
  • Providing offline access to the SlideWiki system.

To read more about SlideWiki features, see:

On behalf of the SlideWiki team,
Ali Khalili, Darya Tarasowa and Sören Auer