Salem Benferhat
Centre de Recherche en Informatique de Lens, University of Artois (France)
Personal webpage:
http://www.cril.univ-artois.fr/~benferhat/
Keynote talk: A practical handling of conflicting ontologies
- This talk addresses the problem of inconsistency-tolerant query answering in ontologies. The first part sets up a general framework in which inconsistency-tolerant query answering is viewed as the combination of a composite modifier, which transforms the original ABox (i.e., a set of facts) into an MBox (multiple ABoxes), and an inference strategy, which evaluates queries against an MBox knowledge base (i.e., a set of rules plus an MBox). Based on this framework, eight genuine ways to deal with inconsistency have been identified in the context of DL-Lite ontologies. The second part focuses on practical (or tractable) strategies for dealing with inconsistency when the ABox is prioritized. In particular, we propose a set of desirable properties that any rational inconsistency-tolerant semantics should naturally satisfy. These properties lead us to identify strategies for selecting a unique preferred repair from an inconsistent prioritized ABox. Finally, I briefly present a case study on ontology-based modeling of Vietnamese traditional dance videos.
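To make the repair-based machinery concrete, the following is a minimal sketch (not the speaker's system; the propositional encoding and the example facts are invented here, and "AR"/"brave" are standard names for the cautious and brave repair semantics from the inconsistency-tolerant literature): an inconsistent ABox is turned into the set of its maximal consistent subsets (an MBox of repairs), and a query answer is accepted either in all repairs or in some repair.

```python
# Illustrative sketch of repair-based query answering over an inconsistent
# ABox. Facts are plain strings and `conflicts` lists pairs of facts that
# cannot hold together (standing in for TBox-induced clashes); all names
# and data here are invented for the example.
from itertools import combinations

def is_consistent(facts, conflicts):
    return not any({a, b} <= facts for a, b in conflicts)

def repairs(abox, conflicts):
    """All maximal consistent subsets of the ABox -- together, an MBox."""
    consistent = [set(s) for r in range(len(abox) + 1)
                  for s in combinations(abox, r)
                  if is_consistent(set(s), conflicts)]
    return [s for s in consistent if not any(s < t for t in consistent)]

def ar_entails(abox, conflicts, fact):
    """Cautious (AR-style) semantics: the fact must hold in every repair."""
    return all(fact in r for r in repairs(abox, conflicts))

def brave_entails(abox, conflicts, fact):
    """Brave semantics: the fact must hold in at least one repair."""
    return any(fact in r for r in repairs(abox, conflicts))

abox = {"Teaches(ann,db)", "Student(ann)", "Professor(ann)"}
conflicts = [("Student(ann)", "Professor(ann)")]  # disjoint concepts
print(ar_entails(abox, conflicts, "Teaches(ann,db)"))  # True: in all repairs
print(ar_entails(abox, conflicts, "Student(ann)"))     # False
print(brave_entails(abox, conflicts, "Student(ann)"))  # True
```

Under prioritization, the point of the talk's second part is precisely to replace the quantification over all repairs with a tractable selection of a single preferred repair.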
- This is joint work with Zied Bouraoui, Truong-Thanh Ma and Karim Tabia, and has received support from the European RISE project AniAge (High Dimensional Heterogeneous Data based Animation Techniques for Southeast Asian Intangible Cultural Heritage Digital Content).
Georg Gottlob
University of Oxford (United Kingdom)
Personal webpage:
http://www.cs.ox.ac.uk/people/georg.gottlob/index.html
Keynote talk: Swift Logic for Big Data and Knowledge Graphs
- Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully-fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine learning and analytics packages. We present KRR formalisms and a system achieving these goals, and give examples of applications where machine learning and logical reasoning complement each other.
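As a rough illustration of the kind of rule-based reasoning such a KGMS must make scalable, here is a toy Datalog-style forward chainer over a triple store; the facts, rules and predicate names are invented for the example, and a production system would of course use far more efficient evaluation:

```python
# Toy knowledge graph reasoning: triples plus Datalog-style rules, evaluated
# bottom-up until no new facts can be derived. Variables are '?'-prefixed.

def match(pattern, triple, env):
    """Unify a pattern with a triple under bindings `env`; None on failure."""
    env = dict(env)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if env.get(p, t) != t:
                return None
            env[p] = t
        elif p != t:
            return None
    return env

def forward_chain(facts, rules):
    """Naive bottom-up evaluation: apply all rules until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            envs = [{}]
            for pat in body:  # join the body patterns against known facts
                envs = [e2 for e in envs for t in facts
                        if (e2 := match(pat, t, e)) is not None]
            for e in envs:
                new = tuple(e.get(x, x) for x in head)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

kg = {("acme", "subsidiary_of", "globex"),
      ("globex", "subsidiary_of", "initech"),
      ("initech", "based_in", "eu")}
rules = [
    # ownership is transitive
    ([("?x", "subsidiary_of", "?y"), ("?y", "subsidiary_of", "?z")],
     ("?x", "subsidiary_of", "?z")),
    # a subsidiary inherits the parent's jurisdiction
    ([("?x", "subsidiary_of", "?y"), ("?y", "based_in", "?j")],
     ("?x", "based_in", "?j")),
]
print(("acme", "based_in", "eu") in forward_chain(kg, rules))  # True
```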
Dominik Ślęzak
University of Warsaw (Poland)
Personal webpage:
http://www.roughsets.org/fellows/dominik-slezak/
Keynote talk: Toward Approximate Intelligence: A Rough Set Perspective
- It is evident that humans do not operate with precise information in decision-making and thus, it might be unnecessary to provide them with precise outcomes of reasoning, modeling or analytical processes. Consequently, the question arises whether approximate results of computations or, for instance, results derived from approximate data could be delivered more efficiently than their standard counterparts. Such questions are similar to those about the precision of calculations conducted by ML and KDD methods, whereby heuristic algorithms could be boosted by letting them rely on approximate computations. This leads us toward a discussion of the importance of approximations in the areas of machine intelligence and business intelligence and, more generally, of the meaning of approximate derivations for various aspects of AI.
This talk provides a few illustrations for the above discussion, with a special focus on the rough set and information granulation approaches. We begin with a case study of attribute selection understood as the extraction of multiple approximate data models. We recall our earlier experiences with incorporating attribute selection tools into coal-mine monitoring systems and refer to some of our current projects.
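For a flavor of what such extraction can look like, here is a toy greedy computation of an "approximate reduct", i.e., a small attribute subset that almost determines the decision; the data, the eps tolerance and the consistency measure are illustrative assumptions, not the monitoring tool itself, and a real extractor would produce many such subsets rather than one:

```python
# Greedy "approximate reduct" sketch: keep adding the attribute that most
# improves how well value patterns determine the decision, stopping once a
# fraction eps of inconsistency is tolerated. All numbers are invented.
from collections import Counter, defaultdict

def consistency(rows, decisions, attrs):
    """Fraction of rows covered by the majority decision of their pattern."""
    groups = defaultdict(Counter)
    for row, d in zip(rows, decisions):
        groups[tuple(row[a] for a in attrs)][d] += 1
    return sum(c.most_common(1)[0][1] for c in groups.values()) / len(rows)

def approximate_reduct(rows, decisions, eps=0.05):
    selected = []
    while (consistency(rows, decisions, selected) < 1.0 - eps
           and len(selected) < len(rows[0])):
        best = max((a for a in range(len(rows[0])) if a not in selected),
                   key=lambda a: consistency(rows, decisions, selected + [a]))
        selected.append(best)
    return selected

# Four sensor readings; one attribute suffices to explain the alarms:
rows = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1)]
decisions = ["safe", "safe", "alarm", "alarm"]
print(approximate_reduct(rows, decisions))  # [0]: attribute 0 alone suffices
```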
The second case study is about an approximate database engine – deployed in the cyber-security industry – which works on granulated data summaries. Herein, query operations take the form of fast transformations of input summaries into summaries reflecting the output data. We show how rough set principles helped us achieve good accuracy of query results. We also discuss how to design similar tools for ML purposes.
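The following toy sketch (based on my reading of the abstract, not the deployed engine) conveys the general idea: rows are grouped into packs, each pack is reduced to a (min, max, sum, count) summary, and an aggregate query then classifies each pack, in rough-set fashion, as surely relevant, surely irrelevant, or boundary, with only boundary packs contributing an estimate:

```python
# Answering SUM(x) WHERE x > v from pack summaries instead of raw rows.
# Pack classification mirrors rough-set lower/upper approximations.

def summarize(values, pack_size=4):
    packs = [values[i:i + pack_size] for i in range(0, len(values), pack_size)]
    return [(min(p), max(p), sum(p), len(p)) for p in packs]

def approx_sum_greater_than(summaries, v):
    total = 0.0
    for lo, hi, s, _ in summaries:
        if lo > v:            # pack entirely inside the answer: exact sum
            total += s
        elif hi <= v:         # pack entirely outside: skip, no rows read
            continue
        else:                 # boundary pack: crude estimate under uniformity
            total += (hi - v) / (hi - lo) * s
    return total

values = [1, 2, 3, 4, 10, 11, 12, 13, 5, 6, 7, 8]
summaries = summarize(values)
print(approx_sum_greater_than(summaries, 6))  # approx. 63.3
print(sum(x for x in values if x > 6))        # exact: 61
```

Only the boundary packs would ever force such an engine back to the raw data, which is what makes summary-to-summary query transformations fast.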
In the third case study, we follow paradigms of information granulation in order to create a software library which helps developers to create AI agents for eSport games. The idea is to let humans encode basic game-related concepts which can be used by intelligent bots in a simplified abstraction of a game. We also refer to another project in which a similar layer of intuitively defined game-related concepts is designed for an opposite purpose – to advise players how they can improve their skills based on an analysis of their past playouts. In both cases, the ability to approximate the “real game world” is the key to success.