AKSW takes part in EU-funded LATC project

The AKSW group is member of the recently started LATC (Linked Open Data Around-The-Clock) project funded by the European Union. LATC aims to improve quality and quantity of Linked Data on the Web, e.g. by developing a 24/7 interlinking engine. Read more in the official LATC Press Release:

In this, the Petabyte Age, technologists have a growing obsession with data—big data. But data isn’t just the province of trained specialists anymore. Data is changing the way scientists research, the way that journalists investigate, the way government officials report their progress and the way citizens participate in their own governance. The emerging Web of Linked Data is the largest source of multi-domain, real-world and real-time data that currently exists. As data integration and information quality assessment increasingly depend on the availability of large amounts of real-world data, these new technologists are going to need to find ways to connect to the Linked Open Data (LOD) cloud (http://lod-cloud.net/).

With the explosive growth of the LOD cloud, which has doubled in size every 10 months since 2007, utilizing this global data space in a real-world setup has proved challenging. The amount and quality of the links between LOD data sources remains sparse and there is no well-documented and cohesive set of tools that enables individuals and organizations to easily produce and consume Linked Open Data.

A new project aims to change this, making it easier to connect to the LOD cloud by offering support to data owners, such as government agencies, Web developers who want to build applications with Linked Data, and Small and Medium Enterprises (SMEs) that want to benefit from the lightweight data integration possibilities of Linked Data. The LOD Around-the-Clock (LATC) project is an EU co-funded project composed of leading Linked Open Data researchers and practitioners. Co-ordinated by the Digital Enterprise Research Institute, NUI Galway (Ireland), LATC brings together a team of Linked Data researchers and practitioners from the Vrije Universiteit Amsterdam (The Netherlands), Freie Universität Berlin (Germany), Institut für Angewandte Informatik, Universität Leipzig (Germany) and Talis Ltd (United Kingdom) who will, over the next two years, support people and institutions in consuming and publishing Linked Open Data, on the Web and in the Enterprise.

In addition to the LATC core team, a large Advisory Committee with more than 35 members participates in the LATC activities: governmental organisations such as the UK Office of Public Sector Information and the European Environment Agency; researchers and practitioners such as the University of Manchester, University of Economics Prague, Vulcan Inc., CTIC Technological Center Spain, the Open Knowledge Foundation; last but not least standardisation bodies, including W3C (represented by Sir Tim Berners-Lee).

Homepage: http://latc-project.eu

Posted in Announcements, LATC, Projects | 1 Comment

Software release: LIMES – Link Discovery Framework for Metric Spaces

The first public release of the LIMES framework (Link Discovery Framework for Metric Spaces) is available for download at:

http://limes.sf.net

LIMES implements time-efficient and lossless approaches for large-scale link discovery based on the characteristics of metric spaces. It is typically more than 60 times faster than other state-of-the-art link discovery frameworks.

LIMES is available:

  • as a standalone Java tool for carrying out link discovery on a local server (faster); in this case, LIMES must be configured via an XML file,
  • via the easily configurable web interface of the LIMES Linking Service at http://limes.aksw.org (results can be downloaded as N-Triples files).
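
The speed-up stems from a property of metric spaces, the triangle inequality: for any exemplar point e, d(s, t) >= |d(s, e) - d(e, t)|, so many candidate comparisons can be discarded without ever computing d(s, t). The following sketch illustrates the idea in its simplest form; it is a hypothetical simplification for illustration, not LIMES’s actual implementation:

```python
# Illustrative sketch of metric-space pruning as used in link discovery
# frameworks such as LIMES (a hypothetical simplification, not LIMES code).
# The triangle inequality gives d(s, t) >= |d(s, e) - d(e, t)| for any
# exemplar e, so targets whose distance to e differs from the source's
# distance to e by more than the threshold can be skipped outright.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def link(sources, targets, threshold):
    exemplar = targets[0]                       # any fixed reference point
    t_dist = [(t, euclidean(exemplar, t)) for t in targets]
    links = []
    for s in sources:
        d_se = euclidean(s, exemplar)
        for t, d_et in t_dist:
            if abs(d_se - d_et) > threshold:    # lower bound already too large
                continue                        # prune without computing d(s, t)
            if euclidean(s, t) <= threshold:
                links.append((s, t))
    return links

print(link([(0.0, 0.0), (5.0, 5.0)],
           [(0.1, 0.0), (5.0, 5.1), (9.0, 9.0)], 0.2))
# [((0.0, 0.0), (0.1, 0.0)), ((5.0, 5.0), (5.0, 5.1))]
```

Because the pruning only discards pairs whose distance provably exceeds the threshold, the result is identical to the exhaustive comparison — which is what “lossless” means above.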
Posted in Announcements, LIMES, Software Releases | 2 Comments

AKSW presents four papers at ISWC in Shanghai and wins Best Paper award

The AKSW research group is represented in the main ISWC conference programme this year with four papers. The International Semantic Web Conference (ISWC) is the major international forum where the latest research results and technical innovations on all aspects of the Semantic Web are presented. Acceptance rates for the main conference programme this year were 20% for the research track and 26% for the In-Use track. AKSW’s presentations at ISWC in Shanghai include:

The paper titled Knowledge Engineering for Historians on the Example of the Catalogus Professorum Lipsiensis received the Best Paper award in the In-Use track.

Posted in Announcements, Erfurt, OntoWiki, ORE, Papers | Leave a comment

ORE 0.1 Released

The set of tools released by the AKSW research group has a new member: ORE. ORE stands for ontology repair and enrichment. It is a tool that helps knowledge engineers improve an OWL ontology through a wizard-like repair process. It uses state-of-the-art methods for fixing inconsistencies and suggesting additions to an ontology, while remaining efficient for small and medium-sized ontologies. A screencast demonstrating its functionality is available. As usual, the tool is open source, so you are free to download it. More information is available on the ORE wiki page. While the initial release already offers some quite powerful features, we plan to extend the tool in the mid-term with full support for knowledge bases available as Linked Data or SPARQL endpoints (as opposed to OWL/RDF files) and with the detection of many common modelling errors. Thanks to Lorenz Bühmann for implementing ORE in his master’s thesis.

Posted in Uncategorized | Leave a comment

AKSW coordinates EU-funded research project LOD2 aiming to take the Web of Linked Data to the next level

A wealth of information is already widely available on the Internet or in company-wide intranets. In many situations, however, we tend to perceive this plethora of information as an information overload, since it is still rarely possible to answer search queries that go beyond simple keyword search, and it remains tedious to integrate information from different sources in unforeseen ways. Enabling such intelligent ways to process information on the Web is the key aim of the Semantic Web vision, but it seems that its realization based on logic and reasoning will take more time than initially anticipated.

Recently however, the Linked Data paradigm – a more lightweight and pragmatic approach for integrating information on the Web – gained traction. It is based on representing information in facts consisting of subject, predicate and object (aka RDF triples), publishing these on the Web and interlinking them by using the same mechanism as linking between web pages (via URIs). With more than 20 billion facts thus already published as Linked Open Data (LOD) the document Web is enriched with a data commons comprising, for example, all the BBC programming, Wikipedia as a structured knowledge base (DBpedia) and statistical information from Eurostat and the US census.
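
In concrete terms, a fact such as “Leipzig is located in Germany” becomes a triple whose subject, predicate and object are URIs (or, for the object, possibly a literal). A minimal illustration with DBpedia-style URIs (the exact property URIs here are illustrative):

```python
# A minimal illustration of RDF triples as (subject, predicate, object)
# tuples, using DBpedia-style URIs. Interlinking works by reusing the
# same URI across facts and datasets, just as hyperlinks connect pages.

triples = [
    ("http://dbpedia.org/resource/Leipzig",
     "http://dbpedia.org/ontology/country",
     "http://dbpedia.org/resource/Germany"),
    ("http://dbpedia.org/resource/Leipzig",
     "http://www.w3.org/2000/01/rdf-schema#label",
     "Leipzig"),                                  # a literal object
]

# Because both facts share the subject URI, any consumer can merge them:
about_leipzig = [(p, o) for s, p, o in triples
                 if s == "http://dbpedia.org/resource/Leipzig"]
print(len(about_leipzig))  # 2
```

This URI-based identification is what lets independently published datasets such as DBpedia and Eurostat be joined without any prior agreement between their publishers.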

Co-funded by the European Union with 6.5 million euros, as well as by companies and research institutions from 6 European countries, the LOD2 project aims to realize the Web of Linked Data by developing crucial technological building blocks for the application of the Linked Data paradigm in companies, Web communities and governmental institutions. In particular, the LOD2 project will develop:

  • enterprise-ready tools and methodologies for exposing and managing very large amounts of structured information on the Data Web,
  • a testbed and bootstrap network of high-quality multi-domain, multi-lingual ontologies from sources such as Wikipedia and OpenStreetMap,
  • algorithms based on machine learning for automatically interlinking and fusing data from the Web,
  • standards and methods for reliably tracking provenance, ensuring privacy and data security, as well as for assessing the quality of information,
  • adaptive tools for searching, browsing, and authoring of Linked Data.
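
A naive baseline conveys what the interlinking task involves: matching entities across datasets by the similarity of their labels. The toy sketch below (hypothetical data, stdlib string similarity) is far simpler than the learned link specifications LOD2 targets, but shows the shape of the problem:

```python
# Toy baseline for interlinking two datasets by label similarity.
# Real link-discovery tools learn far more sophisticated link
# specifications; this only illustrates the task itself.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def interlink(dataset_a, dataset_b, threshold=0.8):
    """Return owl:sameAs candidate pairs (uri_a, uri_b)."""
    links = []
    for uri_a, label_a in dataset_a.items():
        for uri_b, label_b in dataset_b.items():
            if similarity(label_a, label_b) >= threshold:
                links.append((uri_a, uri_b))
    return links

a = {"http://ex.org/a/1": "University of Leipzig"}
b = {"http://ex.org/b/7": "Universität Leipzig",
     "http://ex.org/b/9": "Free University of Berlin"}
print(interlink(a, b))
```

The hard parts — choosing the right properties to compare, the right similarity measures and the right thresholds per dataset pair — are exactly what the machine-learning-based algorithms in the bullet above are meant to automate.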

The resulting tools, methods and data sets have the potential to change the Web as we know it today. This makes LOD2 relevant for researchers, industry and citizens alike. Whether it is about the efficient integration of enterprise data, the open-standardized access to scientific publications and experiment data or the opening of governmental data silos for the creative use by citizens, LOD2 will improve the usability of the Web for integrating heterogeneous information.

The 4-year collaborative research and development project, which is coordinated by the AKSW research group at Universität Leipzig, starts in September 2010. It involves the partners Centrum Wiskunde & Informatica from the Netherlands, the National University of Ireland, Galway, Freie Universität Berlin, the UK-based OpenLink Software, the Semantic Web Company from Vienna, the Belgian IT service provider TenForce, the French enterprise-search specialist Exalead, the international publishing house Wolters Kluwer, as well as the non-profit Open Knowledge Foundation.

For companies and organizations owning large datasets of public interest and interested in publishing and interlinking these on the Data Web, the LOD2 partners offer a Linked Open Data Starter Service (LODS). The application deadline for this free consulting and development support is 15 December 2010. Further information is available on the LOD2 website: http://lod2.eu.

Posted in Announcements, Projects | Leave a comment

Triplification Challenge Winners

Today we announced the winners of this year’s Triplification Challenge, which were selected from 23 submissions.

Open Government Data Track

  • Winner: Richard Cyganiak, Fadi Maali and Vassilios Peristeras, “Self-Service Linked Government Data with dcat and Gridworks”
  • Honorary Mention: Christoph Boehm, Felix Naumann, Markus Freitag, Stefan George, Norman Höfler, Martin Köppelmann, Claudia Lehmann, Andrina Mascher and Tobias Schmidt, “Linking Open Government Data: What Journalists Wish They Had Known”
  • Honorary Mention: Alexander De Leon, Victor Saquicela, Luis M. Vilches-Blázquez, Boris Villazón-Terrazas, Freddy Priyatna, Oscar Corcho, Carlos Buil, Jose Mora and Jean Paul Calbimonte, “Geographical Linked Data: a Spanish Use Case”

Open Track

  • Winner: Danh Le Phuoc, “Live Open Linked Sensor database”
  • Winner: Pablo Mendes, Pavan Kapanipathi and Alexandre Passant, “Twarql: Tapping Into the Wisdom of the Crowd”
  • Honorary Mention: Oktie Hassanzadeh, Reynold S. Xin, Christian Fritz, Yang Yang and Renée J. Miller, “Bib Base Triplified”

We thank all participants for their submissions, which were of extraordinarily high quality, and we also thank the members of the reviewer committee for their help in selecting the winners. We are also incredibly thankful to the sponsors of this year’s prizes: Wolters Kluwer and Semantic Universe.

We are very much looking forward to next year’s challenge, which will again be organized in conjunction with the annual I-Semantics conference in Graz in September 2011.

Posted in Triplify | Leave a comment

DL-Learner Build 2010-08-07 released

We are happy to announce the next release of DL-Learner, a tool for learning OWL class expressions from examples and background knowledge. It extends Inductive Logic Programming (ILP) to Description Logics and the Semantic Web. The tool has matured over the past 3 years and is now used in a number of applications. Some features of this release are:

  • support for OWL API 3 and OWL 2
  • ORE (ontology repair and enrichment) tool based on DL-Learner algorithms (soon to be migrated to its own project)
  • several new heuristics, e.g. generalised F-Measure, and efficient stochastic heuristic approximation methods
  • learning algorithms for the EL description logic
  • support for hasValue construct in combination with string datatype
  • support for refining existing definitions (instead of learning from scratch) for the CELOE ontology engineering algorithm
  • support for direct Pellet 2 integration and reasoners connected via OWLlink
  • more unit tests, bug fixes and features
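
The F-Measure heuristics mentioned above score a candidate class expression by how well it covers the positive examples without over-generalising. A simplified sketch of plain F-measure scoring (DL-Learner’s generalised variant and its stochastic approximations are more involved than this):

```python
# Simplified F-measure scoring of a candidate class expression:
# 'covered' is the set of individuals the candidate describes,
# 'positives' the labelled positive examples. This is an illustrative
# sketch, not DL-Learner's actual (generalised) heuristic.

def f_measure(covered, positives):
    tp = len(covered & positives)          # correctly covered positives
    if tp == 0:
        return 0.0                          # also avoids division by zero
    precision = tp / len(covered)
    recall = tp / len(positives)
    return 2 * precision * recall / (precision + recall)

# Candidate covers {a, b, c}; true positives are {a, b, d}.
score = f_measure({"a", "b", "c"}, {"a", "b", "d"})
print(round(score, 3))  # 0.667
```

During learning, such a score lets the algorithm rank thousands of candidate expressions and expand only the most promising ones.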

DL-Learner can be used to:

  • solve general supervised Machine Learning problems using ontologies as background knowledge (given as OWL files, SPARQL endpoints, etc.), e.g. it was used to predict whether chemicals can cause cancer
  • help knowledge engineers by learning definitions and subclass axioms (see the Protege plugin and OntoWiki plugin)
  • generating user recommendations when browsing knowledge bases

I’d like to thank all contributors, in particular active developers and everyone who sent us valuable feedback.

The tool can be downloaded here.

Posted in Announcements, DL-Learner, Projects, Software Releases | Leave a comment

Triplify 0.8 Released

We just released version 0.8 of the Triplify script, which includes the following feature enhancements and fixes:

  • Triplify now supports the Semantic Pingback mechanism: it exposes an X-Pingback HTTP header field, it contains an XML-RPC service (also usable by conventional Pingback clients) and it exports Pingback statements along with the instance data.
  • Fix: The cache ID is now generated using the server name, port and request URI.
  • Fix: We added a 404 Resource not Found error message.
  • Fix: We added a config option to disable the use of mod_rewrite (for cases where the module is available but not configured).
  • Fix: Removed hard-coded MySQL settings to allow e.g. PostgreSQL servers (#2899948)
  • Fix: Duplicate triples in some cases (#2833620)
  • Fix: 404 when URI with query was requested (e.g. json output, #2631600)
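
From a client’s perspective, the Pingback mechanism works in two steps: discover the server via the X-Pingback response header, then call the standard pingback.ping XML-RPC method. A sketch of that flow (header and method names per the Pingback specification; this is not Triplify’s own code):

```python
# Sketch of how a Pingback client would talk to a Pingback-enabled
# server such as Triplify's: read the X-Pingback header, then call
# pingback.ping(source, target) over XML-RPC.
import xmlrpc.client

def pingback_server(headers):
    """Extract the Pingback server URL from HTTP response headers."""
    for name, value in headers.items():
        if name.lower() == "x-pingback":
            return value
    return None

def send_pingback(headers, source_uri, target_uri):
    server_url = pingback_server(headers)
    if server_url is None:
        return None                        # target is not Pingback-enabled
    server = xmlrpc.client.ServerProxy(server_url)
    return server.pingback.ping(source_uri, target_uri)

print(pingback_server({"X-Pingback": "http://example.org/triplify/pingback"}))
# http://example.org/triplify/pingback
```

Semantic Pingback extends this plain mechanism with RDF: the linking statements Triplify exports alongside the instance data let the server record *typed* links rather than mere mentions.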

The new features are documented on triplify.org.

Thanks to everybody contributing bug fixes or comments and code (especially Eric Feliksik).

Posted in Xturtle | Leave a comment

Networking Session on Governmental Linked Data at ICT2010

The Open Knowledge Foundation Working Group on EU Open Data (where AKSW is an active member) is organising a session on linked government data at the ICT2010 event in Brussels later this year.

  • Where? T 003, Brussels Expo
  • When? 11:00-12:30 CET, 28th September 2010

This networking session will discuss how public access to government data – crucial for an open and transparent society – can be improved.

This session has been proposed by IT professionals, scientists and government representatives organised – under the auspices of the Open Knowledge Foundation – as the Working Group on EU Open Data. It aims to establish a forum for networking and exchanging ideas with regard to publishing and linking governmental data, identifying technological developments and showcasing successful cases of linked governmental data. Developments in linked data could help further integrate information published by regional, national and European public administrations. The session is thematically relevant to a number of pillars within the Framework Programme as well as the Competitiveness and Innovation Programme.

Posted in Events | Leave a comment

ORE 0.2 Released

Today, we released version 0.2 of the ontology repair and enrichment (ORE) tool. It helps knowledge engineers improve an OWL ontology through a wizard-like repair process and uses state-of-the-art ontology debugging methods. The main feature in version 0.2 is a mode for incrementally detecting inconsistencies in large knowledge bases available as SPARQL endpoints. Using this mode, we have detected inconsistencies and computed justifications in DBpedia Live and OpenCyc. To the best of our knowledge, both knowledge bases were previously too large for computing justifications on standard hardware, i.e. inconsistencies could not be fixed efficiently. A screencast illustrates this process for the case of DBpedia Live. Thanks to Lorenz Bühmann for his work on ORE.
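
The incremental idea can be sketched abstractly: rather than loading the whole knowledge base, fragments are retrieved batch by batch and consistency is re-checked after each addition, so any inconsistency is localised to a small fragment over which a justification is tractable. In the hypothetical sketch below, fetch_batch and is_consistent stand in for SPARQL retrieval and a real OWL reasoner; this is not ORE’s actual API:

```python
# Hypothetical sketch of incremental inconsistency detection: axioms
# are pulled from an endpoint in small batches and consistency is
# re-checked after each batch. 'fetch_batch' and 'is_consistent' are
# stand-ins for SPARQL retrieval and an OWL reasoner, not ORE's API.

def find_inconsistent_fragment(fetch_batch, is_consistent, batch_size=1000):
    loaded = []
    offset = 0
    while True:
        batch = fetch_batch(offset, batch_size)
        if not batch:
            return None                    # whole knowledge base consistent
        loaded.extend(batch)
        if not is_consistent(loaded):
            return batch                   # the batch that triggered the clash
        offset += batch_size

# Toy knowledge base: "axiom-7" contradicts the earlier axioms.
axioms = [f"axiom-{i}" for i in range(10)]
fetch = lambda off, n: axioms[off:off + n]
consistent = lambda kb: "axiom-7" not in kb
print(find_inconsistent_fragment(fetch, consistent, batch_size=3))
# ['axiom-6', 'axiom-7', 'axiom-8']
```

Keeping the working fragment small is what makes justification computation feasible on knowledge bases the size of DBpedia Live.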

ORE Homepage | Download | Screencast | AKSW Homepage

Posted in Announcements, DL-Learner, ORE, Projects, Software Releases | Leave a comment