
Archive for the category 'Software Releases'

RDFaCE 0.7 Schema.org edition released…

May 11, 2014 - 4:34 pm by AliKhalili

To keep pace with recent changes in TinyMCE, WordPress and Schema.org, we have released a new version of our RDFaCE editor for semantic content authoring.

The new version of RDFaCE comes with the following main changes:

  • Compatibility with WordPress 3.9 and TinyMCE 4.0
  • Support for all existing DBpedia classes for automatic content annotation
  • Support for inline content editing using the HTML5 contenteditable attribute
  • Configuration for automatic annotation (confidence, markup format, entity types)
  • Some bug fixes
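
To give a rough idea of the kind of markup such annotation produces, the sketch below wraps an entity mention in an RDFa-annotated `<span>`. The helper function and the example URIs are purely illustrative and are not part of the plugin itself:

```python
# Minimal, illustrative sketch of RDFa entity annotation: wrap a mention in a
# <span> carrying an RDFa type (typeof) and the linked resource (resource).
# The function name and example URIs are ours, not RDFaCE's API.

def annotate_rdfa(text: str, mention: str, type_curie: str, resource_uri: str) -> str:
    """Wrap the first occurrence of `mention` in an RDFa-annotated <span>."""
    span = f'<span typeof="{type_curie}" resource="{resource_uri}">{mention}</span>'
    return text.replace(mention, span, 1)

html = annotate_rdfa(
    "Leipzig is a city in Saxony.",
    "Leipzig",
    "schema:City",
    "http://dbpedia.org/resource/Leipzig",
)
```

In the editor this markup stays invisible to the author, who only sees the highlighted text, which is the point of the WYSIWYM approach.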

For more information about RDFaCE, please read this paper or watch our video about the WYSIWYM (What You See Is What You Mean) concept.

To try a demo of the new version, visit http://rdface.aksw.org/demo/

To download the RDFaCE plugin for WordPress, visit http://wordpress.org/plugins/rdface/

AKSW Colloquium with NIF Release Preparation on Monday, February 10

February 7, 2014 - 2:23 pm by KonradHoeffner

NIF Release Preparation

On Monday, February 10, at 1:30 pm in room P702 (in the Paulinum of the University of Leipzig's main building at Augustusplatz), Sebastian Hellmann will present the Natural Language Processing (NLP) Interchange Format (NIF). NIF is based on a Linked Data-enabled URI scheme for identifying elements in (hyper-)texts and on an ontology for describing common NLP terms and concepts. During the meeting we will jointly review the existing tools and infrastructure, collect open issues and discuss potential fixes. Bringing a laptop is recommended.

About the AKSW Colloquium

This event is part of a series of events on Semantic Web technology. Please see http://wiki.aksw.org/Colloquium for information about previous and future events. As always, Bachelor's and Master's students can earn credit points for attendance, and there is complimentary coffee and cake after the session.

Abstract

We are currently observing a plethora of Natural Language Processing tools and services being made available. Each of these tools and services has its particular strengths and weaknesses, but exploiting the strengths and synergistically combining different tools is currently an extremely cumbersome and time-consuming task. Moreover, once a particular set of tools has been integrated, this integration is not reusable by others. We argue that simplifying the interoperability of different NLP tools performing similar but also complementary tasks will facilitate the comparability of results and the creation of sophisticated NLP applications. In this session, we present the NLP Interchange Format (NIF). NIF is based on a Linked Data-enabled URI scheme for identifying elements in (hyper-)texts and an ontology for describing common NLP terms and concepts. In contrast to more centralized solutions such as UIMA and GATE, NIF enables the creation of heterogeneous, distributed and loosely coupled NLP applications, which use the Web as an integration platform. We present several use cases of the second version of the NIF specification (NIF 2.0) and the results of a developer study.
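
The URI scheme at the heart of NIF can be sketched in a few lines: substrings of a text are identified by character offsets appended to the document URI as an RFC 5147-style fragment, and each such string carries a handful of core properties. The triples below are a simplified illustration, not a complete or normative serialization:

```python
# Simplified sketch of NIF-style string identification: a substring
# text[begin:end] of a document is minted a URI of the form
# <doc_uri>#char=<begin>,<end>, and described with a few core NIF
# properties (shown here as plain (subject, predicate, object) tuples).

def nif_string_uri(doc_uri: str, begin: int, end: int) -> str:
    """Mint a URI identifying the substring text[begin:end]."""
    return f"{doc_uri}#char={begin},{end}"

def nif_triples(doc_uri: str, text: str, begin: int, end: int):
    """Emit core NIF properties for one substring of the document."""
    s = nif_string_uri(doc_uri, begin, end)
    return [
        (s, "nif:anchorOf", text[begin:end]),           # the covered text
        (s, "nif:beginIndex", begin),
        (s, "nif:endIndex", end),
        # every string points back to the whole-document context
        (s, "nif:referenceContext", f"{doc_uri}#char=0,{len(text)}"),
    ]

text = "Sebastian Hellmann will present NIF."
triples = nif_triples("http://example.org/doc1", text, 0, 18)
```

Because any tool can mint the same URI for the same offsets, annotations produced by independent NLP services about the same span merge naturally when the triples are combined.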

Preview release of conTEXT for Linked Data-based text analytics

January 17, 2014 - 3:04 pm by AliKhalili

We are happy to announce the preview release of conTEXT — a platform for lightweight text analytics using Linked Data.
conTEXT enables social Web users to semantically analyze text corpora (such as blogs, RSS/Atom feeds, Facebook, G+, Twitter or SlideWiki.org decks) and provides novel ways for browsing and visualizing the results.

conTEXT workflow

Text analytics in conTEXT starts with collecting information from the Web. conTEXT uses standard information-access methods and protocols such as RSS/Atom feeds, SPARQL endpoints and REST APIs, as well as customized crawlers for WordPress and Blogger, to build a corpus of information relevant to a given user.

The assembled text corpus is then processed by Natural Language Processing (NLP) services (currently FOX and DBpedia Spotlight), which link unstructured information sources to the Linked Open Data cloud through DBpedia. The processed corpus is further enriched by dereferencing the DBpedia URIs and by matching against pre-defined natural-language patterns for DBpedia predicates (BOA patterns).

The processed data can also be joined with other existing corpora in a text analytics mashup. Creating such mashups requires dealing with the heterogeneity of the different corpora as well as of the different NLP services used for annotation; conTEXT employs NIF (NLP Interchange Format) to handle this heterogeneity.

The processed, enriched and possibly merged results are presented to users in different views for exploring and visualizing the data. Additionally, conTEXT provides an annotation refinement user interface based on the RDFa Content Editor (RDFaCE) that lets users revise the annotation results. User-refined annotations are sent back to the NLP services as feedback, so that the system can learn from them.
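
The workflow above can be sketched as a small pipeline. The data shapes, the stub annotator and all names below are our illustration of the flow, not conTEXT's actual code; in the real system an NLP service such as DBpedia Spotlight or FOX plays the annotator's role and NIF provides the shared annotation shape:

```python
# Illustrative sketch of a conTEXT-style flow: collect documents, annotate
# them with an (here: stubbed) NLP service, and merge annotated corpora
# into a text analytics mashup. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    surface: str        # the annotated text span
    entity_uri: str     # DBpedia URI the span was linked to
    begin: int
    end: int

@dataclass
class Document:
    uri: str
    text: str
    annotations: list = field(default_factory=list)

def stub_annotator(doc: Document) -> Document:
    """Stand-in for a real NLP service; links one known mention to DBpedia."""
    target = "Leipzig"
    idx = doc.text.find(target)
    if idx != -1:
        doc.annotations.append(Annotation(
            target, "http://dbpedia.org/resource/Leipzig", idx, idx + len(target)))
    return doc

def build_mashup(*corpora):
    """Join several annotated corpora; a shared shape (NIF in conTEXT)
    is what makes this join straightforward."""
    return [doc for corpus in corpora for doc in corpus]

blog = [stub_annotator(Document("http://example.org/post1", "Leipzig hosts AKSW."))]
feed = [stub_annotator(Document("http://example.org/item1", "News from Berlin."))]
mashup = build_mashup(blog, feed)
```

Once every corpus is normalized to the same annotation shape, the views, visualizations and refinement UI can operate on the mashup without caring which crawler or NLP service produced each document.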

For more information on conTEXT visit:

SlideWiki is now Open Source

September 24, 2013 - 11:45 pm by AliKhalili

We are pleased to announce that we have just released the SlideWiki.org source code under the permissive Apache open-source license. It is now available for download from the AKSW Github repository at:
https://github.com/AKSW/SlideWiki

The SlideWiki database dumps are also available at:
http://slidewiki.org/db/

SlideWiki.org is a platform for OpenCourseWare authoring that enables communities of educators to author, share and re-use multilingual educational content in a truly collaborative way. By completely open-sourcing SlideWiki and giving the community access to all the content, we aim at:

  • Providing open access to crowdsourced e-learning material that can be authored, shared and reused.
  • Collaborating with other open-source projects to improve the quality of the SlideWiki implementation.
  • Inviting developers to openly contribute to SlideWiki and to write customized plugins and themes for SlideWiki.
  • Providing offline access to the SlideWiki system.

To read more about SlideWiki features, see:
http://slidewiki.org/documentation

On behalf of the SlideWiki team,
Ali Khalili, Darya Tarasowa and Sören Auer

Preview release of RDFaCE special edition for Schema.org

April 15, 2013 - 4:12 pm by AliKhalili

We are happy to announce the preview release of our RDFaCE WYSIWYM content editor, special edition for Schema.org. RDFaCE (RDFa Content Editor) extends the TinyMCE rich text editor to facilitate the authoring of semantic documents. This version of RDFaCE is customized for annotating content in the RDFa or Microdata format based on Schema.org vocabularies. It is also published as a WordPress plugin to promote semantic content authoring among a wide range of end users.

The main features of RDFaCE Schema.org edition are:

  • Support for a flexible form-based approach to annotating content using schemas defined by Schema.org.
  • A schema creator module that builds a subset of Schema.org schemas based on the user's preferred domain and requirements.
  • Flexible color schemes for Schema.org schemas.
  • Support for both the RDFa and Microdata formats.
  • Automatic content annotation using external NLP APIs (Alchemy, Extractiv, OpenCalais, Ontos, Evri, Saplo, Lupedia and DBpedia Spotlight). This provides an initial set of annotations that users can modify and extend later on.
  • Combination of the results of multiple NLP APIs based on user preferences, which improves the quality of automatic annotations.
  • One-click editing of annotated entities.
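
One simple way to combine the results of several NLP APIs is weighted voting over the proposed annotations. The sketch below is our illustration of that idea, not RDFaCE's actual combination algorithm: each API votes for (span, entity) pairs, votes are weighted by a user-assigned preference, and only pairs scoring above a threshold are kept:

```python
# Hedged sketch of combining annotations from multiple NLP APIs by weighted
# voting. The weighting scheme and all names are illustrative, not RDFaCE's
# actual algorithm.

from collections import defaultdict

def combine_annotations(api_results, api_weights, threshold=1.0):
    """api_results: {api_name: [(begin, end, entity_uri), ...]}
    api_weights:  {api_name: user-preference weight}
    Returns the (begin, end, entity_uri) tuples whose summed weight
    meets the threshold."""
    scores = defaultdict(float)
    for api, annotations in api_results.items():
        for ann in annotations:
            scores[ann] += api_weights.get(api, 1.0)
    return sorted(ann for ann, score in scores.items() if score >= threshold)

results = {
    "spotlight": [(0, 7, "dbr:Leipzig"), (20, 26, "dbr:Saxony")],
    "alchemy":   [(0, 7, "dbr:Leipzig")],
}
weights = {"spotlight": 0.6, "alchemy": 0.6}
kept = combine_annotations(results, weights, threshold=1.0)
```

Here the Leipzig annotation is confirmed by two services and survives, while the single-service Saxony annotation falls below the threshold; raising or lowering the weights lets the user favor the APIs they trust most.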

For more information on RDFaCE visit: