The Semantic Web aims to make information available in a form that is understandable and automatically processable by machines. Ontologies are the engineering artifacts used for this purpose, and Description Logics (DLs) are a family of logic-based languages particularly suitable for modeling (and reasoning upon) ontologies and the Semantic Web. Great effort has been spent on identifying decidable, or even tractable, DLs, and efficient DL reasoners have been implemented in procedural and object-oriented languages as well as in Prolog.
Nonetheless, incompleteness and uncertainty are intrinsic to much of the information on the World Wide Web. This has motivated research into Probabilistic DLs, some of which derive from approaches developed in the Logic Programming area. Moreover, the integration of DLs with rules and rule-based reasoning is also crucial for knowledge representation and reasoning in the so-called Semantic Web stack vision.
In this talk, I will focus on probabilistic DLs. First, I will briefly overview DLs and reasoning systems. After recalling the Distribution Semantics from Probabilistic Logic Programming, I will show how it and other probabilistic approaches have been applied to DLs, and which inference systems for Probabilistic DLs are available. Learning Probabilistic DL theories is also an interesting issue. A demo of a Web-based system for Probabilistic DLs, implemented in SWI-Prolog and SWISH, will conclude this part of the talk.
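To fix intuitions before the talk, here is a minimal sketch of the Distribution Semantics, using a hypothetical toy knowledge base (the facts, names and probabilities below are invented for illustration): each probabilistic fact is independently true, every subset of facts is a "world", and the probability of a query is the sum of the probabilities of the worlds that entail it.

```python
from itertools import product

# Hypothetical probabilistic facts: each holds independently with probability p.
facts = {("edge", "a", "b"): 0.6,
         ("edge", "b", "c"): 0.7,
         ("edge", "a", "c"): 0.5}

def entails_path(world, src, dst):
    """Check whether dst is reachable from src over the edges in a world."""
    seen, stack = set(), [src]
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(y for (r, x, y) in world if r == "edge" and x == n)
    return False

def query_prob(src, dst):
    """Sum the probabilities of all worlds that entail path(src, dst)."""
    names = list(facts)
    total = 0.0
    for choices in product([True, False], repeat=len(names)):
        world = {f for f, c in zip(names, choices) if c}
        p = 1.0
        for f, c in zip(names, choices):
            p *= facts[f] if c else 1.0 - facts[f]
        if entails_path(world, src, dst):
            total += p
    return total

print(round(query_prob("a", "c"), 4))  # 0.71 = 1 - (1-0.5)*(1-0.6*0.7)
```

Real systems such as those demoed in the talk avoid this exponential world enumeration (e.g. via knowledge compilation), but compute exactly this quantity.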
A further line of research has been pursued by the AI and LP communities to facilitate the integration of DL theories with rules and rule-based reasoning, since this is also crucial in the Semantic Web. Proposals such as Datalog+/- and its extensions, ASP-based systems, and Abductive Logic Programming for modeling and reasoning upon ontologies are significant attempts that the LP community should take seriously. I will briefly discuss these approaches to conclude the talk.
Following a general trend in artificial intelligence, the fields of machine learning and data mining have recently witnessed a growing interest in the use of declarative techniques. What is essential in this paradigm is that users be provided with a way to declaratively specify what the problem is, rather than having to outline how its solution should be computed. This corresponds to a model + solver approach, in which the user specifies the problem in a high-level modelling language and the system automatically transforms such models into a format that a solver can use to efficiently generate a solution. This should be much easier for the user than having to implement or adapt an algorithm that computes a particular solution to a specific problem. Declarative methods could therefore have a radical impact on the fields of machine learning and data mining.
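As a toy illustration of the model + solver split (the map-colouring instance below is invented for this sketch), the user states only the variables, domains and constraints, while a generic solver that knows nothing about the problem does the search:

```python
from itertools import product

# Declarative model (the "what"): variables, domains, constraints.
variables = ["WA", "NT", "SA", "Q"]
domain = ["red", "green", "blue"]
# Adjacent regions must receive different colours.
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]

def solve(variables, domain, constraints):
    """Generic, problem-agnostic solver (the "how"): brute-force search."""
    for values in product(domain, repeat=len(variables)):
        a = dict(zip(variables, values))
        if all(a[x] != a[y] for x, y in constraints):
            return a
    return None

print(solve(variables, domain, constraints))
```

A real constraint programming or ASP system replaces the brute-force loop with sophisticated propagation and search; the point is that the model above would not change.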
In this talk, I shall report on this new trend in machine learning and data mining from two different perspectives. The first is that of a user of existing declarative methods such as constraint programming and answer set programming, where I shall report on experiences, successes and challenges with using this type of technology. The second is that of a developer of declarative languages and solvers for machine learning and data mining, where I shall provide a gentle introduction to different types of languages such as inductive query languages, which extend database query languages with primitives for mining and learning, modelling languages for constraint-based mining, and probabilistic and other programming languages that support machine learning.
More than 25 years ago, together with Siemens, we started to investigate the possibility of replacing the classical rule-based configuration approach with model-based techniques. It turned out that in those days only constraint programming (CP) had any real chance of meeting the application demands. By exploiting CP we were able to significantly improve the productivity of highly trained employees (by more than 300%) and to substantially reduce software development and maintenance costs (by more than 80%). Consequently, CP has been our method of choice for problem solving in industrial projects since 1989.
Some years ago, we started to investigate answer set programming (ASP) techniques, mainly because of the possibility of applying a very expressive first-order logical language for specifying problems. It emerged that, by using simple problem encodings, we were able to solve difficult real-world problem instances, testifying to the enormous improvements in logic programming over the last decades.
Although ASP and CP have proven their practical applicability, we will point out the challenges posed by large problems from the electronics and semiconductor industries. In particular, we will stress the power of problem-specific heuristics, which have turned out to be the key in many applications of problem solvers.
Looking at the famous equation "algorithm = logic + control", most of the current work in the AI community assumes that control should be problem-independent and that only the logical specification depends on the problem to be solved, i.e. "algorithm = logic(problem) + control". Since we deal with NP-hard problems in many cases, it is not surprising that this is a practical approach for current problem-solving technology only up to a certain size of problem instance. However, we observe (and give examples) that problem-specific heuristics allow enormous run-time improvements. This success is based on problem-specific control, i.e. "algorithm = logic(problem) + control(problem)". Unfortunately, the design of such problem-specific heuristics is very time-consuming, and redesigns are frequently required because of recurrent changes to the problem. Interestingly, humans are very successful at developing such problem-specific heuristics. We therefore argue that automating the generation of problem-specific heuristics of satisfactory quality is still an important basic AI research goal with high practical impact, and one that should be achievable.
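The effect of problem-specific control can be made concrete with a small, hypothetical sketch (the instance and heuristic below are invented for illustration, not taken from the talk): the same backtracking "logic" is run with two different "controls", a fixed variable order versus a degree-based ordering that exploits the problem structure.

```python
def color(graph, k, choose):
    """Backtracking k-colouring: the 'logic', parameterized by a
    variable-ordering 'control'."""
    assignment, stats = {}, {"nodes": 0}
    def solve():
        stats["nodes"] += 1
        free = [v for v in graph if v not in assignment]
        if not free:
            return True
        v = choose(free, graph)
        for c in range(k):
            if all(assignment.get(u) != c for u in graph[v]):
                assignment[v] = c
                if solve():
                    return True
                del assignment[v]
        return False
    return solve(), stats["nodes"]

def static_order(free, graph):   # problem-independent control
    return free[0]

def by_degree(free, graph):      # problem-specific control: hardest vertex first
    return max(free, key=lambda v: len(graph[v]))

# Four isolated vertices plus a triangle; 2 colours are not enough.
graph = {"p1": [], "p2": [], "p3": [], "p4": [],
         "t1": ["t2", "t3"], "t2": ["t1", "t3"], "t3": ["t1", "t2"]}

for name, h in [("static", static_order), ("by_degree", by_degree)]:
    ok, nodes = color(graph, 2, h)
    print(f"{name}: solvable={ok}, search nodes={nodes}")
```

The static order rediscovers the triangle's infeasibility under every colouring of the irrelevant vertices, while the degree heuristic refutes the instance after exploring only a handful of nodes; on industrial-size instances this gap is what makes or breaks a solver.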
This invited talk is sponsored by the European Coordinating Committee for Artificial Intelligence (ECCAI).
In this tutorial we show how constraint logic programs (CLP) provide a flexible framework for analysis and verification of other languages. Here the focus is on analysing imperative programming languages. It is first necessary to translate a given imperative program into CLP clauses. Different approaches to automatic translation based on small-step or big-step semantics will be shown, along with their advantages and disadvantages. The role of query-answer transforms in the translation process is also presented. The problem of analysing or verifying properties of an imperative program is thus translated into a CLP analysis or verification problem. The main CLP analysis technique covered in this tutorial is the computation of approximate models of CLP clauses using abstract interpretation over numeric or symbolic abstract domains. The tutorial contains a survey of analysis and verification problems for imperative programs that have been successfully tackled in this way, including verification of safety properties in sequential and concurrent programs, termination, resource analysis and shape analysis.
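To preview the small-step translation idea on a toy counting loop (the program, predicate names and clause shapes below are an illustrative sketch, not the tutorial's exact encoding), one introduces a predicate per program point and a CLP clause per transition; verifying the assertion then amounts to showing the `false` clause underivable.

```python
# Imperative program:
#   l0: x = 0;
#   l1: while (x < n) { x = x + 1; }
#   l2: skip;   // here x == n is expected to hold

clauses = [
    "p0(X, N).",                                   # any initial store
    "p1(X1, N) :- p0(X, N), X1 = 0.",              # effect of x = 0
    "p1(X1, N) :- p1(X, N), X < N, X1 = X + 1.",   # loop iteration
    "p2(X, N) :- p1(X, N), X >= N.",               # loop exit
    "false :- p2(X, N), X =\\= N.",                # safety: assertion violation
]
print("\n".join(clauses))

# Concrete counterpart of the transitions, to sanity-check the encoding:
def run(n):
    x = 0            # p0 -> p1
    while x < n:     # p1 -> p1
        x += 1
    return x         # p1 -> p2

assert all(run(n) == n for n in range(10))
```

An abstract interpreter over, say, a numeric domain would infer the invariant 0 ≤ X ≤ N at `p1`, from which the body of the `false` clause is unsatisfiable.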
Logic Programming (LP) and the family of Description Logics (DLs) are both based on fragments of First Order Logic (FOL). However, they are characterized by very different semantic assumptions. Yet, a partial overlap exists between LP and DLs, which allows known results in LP to be extended and/or adapted to DLs, and vice versa. Even more interestingly, the two can be combined via several integration schemes aimed at designing very expressive FOL languages and, ultimately, at overcoming the aforementioned semantic mismatch between LP and DLs. Several works in Inductive Logic Programming (ILP) testify to the great potential of these hybrid knowledge representation formalisms, also from the perspective of machine learning and inductive reasoning.
In this tutorial talk I will survey the literature of the last 20 years concerning the combination of (I)LP and DLs, with particular emphasis on integration issues. We will see how many interesting things happen, or could happen, along the border between LP and DLs. I hope you will enjoy the brief guided tour!
Datalog+/- is a recently introduced family of expressive extensions of Datalog for knowledge representation and reasoning. In particular, Datalog+/- allows for representing ontological axioms and for query answering under such axioms. The Datalog+/- languages are derived from Datalog by allowing existentially quantified variables, equality, and the falsum in rule heads, and, at the same time, by enforcing suitable restrictions to achieve decidability and tractability. I will give a general overview of the Datalog+/- family of languages, including complexity results for query answering, main application areas, as well as extensions for handling inconsistencies, probabilistic uncertainty, and preferences.
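The distinctive feature, existential quantification in rule heads, can be sketched with a minimal, hypothetical chase step (the rule and fact below are invented for illustration): when the body of a tuple-generating dependency is satisfied but the head is not, a fresh labelled null is invented for the existential variable.

```python
from itertools import count

fresh = (f"_n{i}" for i in count())   # generator of labelled nulls

# TGD with an existential head variable:
#   person(X) -> exists Y. hasFather(X, Y), person(Y)
def chase_step(facts):
    """Apply the rule once wherever it is not yet satisfied, inventing
    a fresh null for the existentially quantified variable."""
    new = set()
    for fact in facts:
        if fact[0] == "person":
            x = fact[1]
            if not any(f[0] == "hasFather" and f[1] == x for f in facts):
                y = next(fresh)
                new |= {("hasFather", x, y), ("person", y)}
    return facts | new

facts = {("person", "alice")}
facts = chase_step(facts)
print(sorted(facts))
```

Note that iterating `chase_step` never terminates, since each new null is again a person; this is precisely why the Datalog+/- languages impose syntactic restrictions (e.g. guardedness) to recover decidability, and in some cases tractability, of query answering.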
Abstract solvers are a recently introduced method for formally describing, comparing and combining solving algorithms: the states of computation are represented as nodes of a graph, the solving techniques as edges between such nodes, and the solving process as a path in the graph, so that formal properties of the algorithms reduce to properties of the graph.
In this tutorial I will give an overview of the application of abstract solvers in Answer Set Programming (ASP). After an introduction devoted to an abstract solver for SAT, I will show abstract solvers for ASP procedures for non-disjunctive programs; then, building on the resulting graphs, I will move to ASP procedures for disjunctive programs. Next, abstract solvers for cautious reasoning are presented. Finally, I will briefly touch on the use of abstract solvers in other research fields, such as Quantified SAT, Constraint ASP and Abstract Argumentation Frameworks.
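To make the graph view concrete, here is a minimal, hypothetical sketch in the spirit of abstract DPLL for SAT (the state representation and rule names are illustrative): a state is a trail of literals, and each solving technique — UnitPropagate, Decide, Fail — is an edge to a successor state, with backtracking corresponding to taking the other edge out of a Decide node.

```python
def solve(formula, trail=()):
    """Follow edges from the state `trail` until a terminal node."""
    assigned = {abs(l): l for l in trail}
    is_false = lambda l: assigned.get(abs(l)) == -l
    # Fail edge: some clause is falsified by the current state.
    if any(all(is_false(l) for l in c) for c in formula):
        return None
    # UnitPropagate edge: extend the trail with a forced literal.
    for c in formula:
        free = [l for l in c if abs(l) not in assigned]
        if len(free) == 1 and all(is_false(l) for l in c if abs(l) in assigned):
            return solve(formula, trail + (free[0],))
    # Decide edge: branch on an unassigned literal; the `or` explores
    # the alternative edge out of the same node on failure.
    for c in formula:
        for l in c:
            if abs(l) not in assigned:
                return solve(formula, trail + (l,)) or solve(formula, trail + (-l,))
    return trail  # terminal node: every clause satisfied

cnf = [(1, 2), (-1, 3), (-2, -3), (-1, -2)]
print(solve(cnf))  # a model as a trail of literals
```

Properties such as termination and soundness of the procedure become statements about this graph — e.g. that every path from the initial node reaches a terminal node — which is exactly the style of argument the tutorial develops for ASP procedures.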