These are our current research projects and general research areas of interest.
RLeap
Two of the main research threads in AI revolve around the development of data-based learners, capable of inferring behavior and functions from experience and data, and model-based solvers, capable of tackling well-defined but intractable models like SAT, classical planning, and Bayesian networks. Learners, and in particular deep learners, have achieved considerable success, but they result in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Solvers, on the other hand, require models that are hard to build by hand. The RLeap project aims to integrate learners and solvers in the context of planning by addressing the problem of learning first-order planning representations from raw perceptions alone, without using any prior symbolic knowledge. The ability to construct first-order symbolic representations and to use them for expressing, communicating, achieving, and recognizing goals is a main component of human intelligence and a fundamental, open research problem in AI. By addressing and solving this problem, the project can make a difference in how general, explainable, and trustworthy AI is understood and achieved.
RLeap is partially funded by an ERC Advanced Grant (1 October 2020 - 30 September 2025). Read more about the project here.
Learning Dynamic Algorithms for Automated Planning
Connecting the fields of model-based reasoning and data-driven learning has recently been identified as one of the key research goals in artificial intelligence. Our project will contribute to this endeavor, focusing on the area of automated planning. We will learn heuristic functions that guarantee optimal solutions, which requires that the learned cost estimates never overestimate the true cost to the goal (admissibility), and planning algorithms that dynamically adapt to the given task. A schematic example of search guided by such a heuristic is sketched below.
This project is partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
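To make the search side of this concrete, here is a minimal, self-contained sketch of A* search with a pluggable heuristic, the setting in which a learned heuristic preserves optimality as long as it is admissible. The toy graph and heuristic table stand in for a learned model and are purely illustrative, not an artifact of the project.

```python
# Minimal sketch: A* search with a pluggable (e.g., learned) heuristic.
# A* returns an optimal plan as long as the heuristic is admissible,
# i.e., it never overestimates the true cost to the goal.
import heapq

def astar(start, goal, successors, h):
    """successors(s) yields (next_state, cost); h(s) estimates cost-to-go."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return path, g
        for t, c in successors(s):
            if g + c < best_g.get(t, float("inf")):
                best_g[t] = g + c
                heapq.heappush(frontier, (g + c + h(t), g + c, t, path + [t]))
    return None, float("inf")

# Hypothetical toy example: a learned model would supply h instead of this table.
graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 5)], "C": [("D", 1)], "D": []}
h_table = {"A": 3, "B": 5, "C": 1, "D": 0}  # admissible for this graph
path, cost = astar("A", "D", lambda s: graph[s], lambda s: h_table[s])
print(path, cost)  # ['A', 'C', 'D'] 5
```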
General research topics
- Learning representations for planning: the ability to plan, which is crucial in intelligent systems, relies on models that describe how the world and sensors work. These models are usually expressed in declarative languages that make the structure of problems explicit, and that support reuse and effective computation. A key open question is how these model representations can be learned automatically. The problem ranges from learning symbolic representations from non-symbolic data to learning hierarchies of (learned or symbolic) representations that support planning at different levels of abstraction. A schematic encoding of such a model is sketched after this list.
- Planning models, algorithms, and techniques: planning models come in different forms depending on the assumptions about actions, states, and sensing. Classical planning is planning under the assumption of deterministic actions, a fully known initial state, and goal states to be reached (see the second sketch after this list). Other forms of planning, like MDP and POMDP planning, relax some of these assumptions or address other aspects, such as continuous state spaces and actions. The challenge is to develop scalable algorithms and techniques that address this variety of planning models.
- Planning and reinforcement learning: reinforcement learning (RL) is a generalization of planning where the planning models are not assumed to be known and goals are replaced by rewards to be maximized. In model-based RL, the RL problem is split in two: first learning the models, and then using them for planning. In model-free RL, a controller is obtained directly from trial and error, without learning a model (see the third sketch after this list). Some of the biggest AI breakthroughs in recent years have come from deep RL, where the value and policy functions are represented by deep neural networks whose weights are learned by trial and error. The current limitations of these methods are that they require huge amounts of data and that the learned policy and value functions do not generalize well. The use of latent model representations that are learned from data without supervision aims to address these limitations and is closely connected with the problem of learning planning representations from data.
- Generalized planning: in the standard planning setting, new problems are solved from scratch. In generalized planning, on the other hand, one looks for general plans or policies that solve many problems from the same domain (see the last sketch after this list). For this, suitable formulations, models, and algorithms are needed. Generalized planning provides another angle from which to study the connection between learning and planning, since in reinforcement learning one is also interested in learning solutions that have some generality and apply to many problem instances.
- Model-based vs. model-free intelligence: the topics of learning, representation, and planning are also at the center of the big split in AI between model-free approaches based on learners and model-based approaches based on solvers. Truly intelligent systems must involve both, very much like human intelligence, which is often described in terms of a fast, reactive System 1 and a slow, deliberative System 2 that are tightly integrated (see Kahneman, 2011). For this integration, the models used by solvers such as planners have to be learned automatically. This integration is a key challenge in AI and a central goal for the research group.
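Illustrating the first topic above, the following sketch shows the kind of declarative model that representation learning targets: a lifted STRIPS-style action schema and its grounding over the objects of an instance. The encoding in plain Python (rather than a modeling language like PDDL) and the blocksworld-style schema are illustrative assumptions, not the group's actual tools.

```python
# Minimal sketch: a lifted STRIPS-style action schema and its grounding.
# The schema moves a clear block ?x, sitting on block ?y, to the table.
from itertools import permutations

schema = {
    "pre": [("clear", "?x"), ("on", "?x", "?y")],
    "add": [("ontable", "?x"), ("clear", "?y")],
    "del": [("on", "?x", "?y")],
}

def ground(schema, objects):
    """Instantiate the lifted schema for every pair of distinct objects."""
    for x, y in permutations(objects, 2):
        sub = lambda atoms: [tuple(t.replace("?x", x).replace("?y", y) for t in a)
                             for a in atoms]
        yield {"name": f"move-to-table({x},{y})",
               "pre": sub(schema["pre"]),
               "add": sub(schema["add"]),
               "del": sub(schema["del"])}

for action in ground(schema, ["a", "b"]):
    print(action["name"], "pre:", action["pre"])
```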
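For the second topic, here is a minimal sketch of classical planning as blind search over a grounded model: deterministic actions, a fully known initial state, and a goal condition to reach. The two-block instance and its two hand-grounded actions are a hypothetical toy example.

```python
# Minimal sketch: classical planning as breadth-first search over a STRIPS model.
from collections import deque

# Grounded actions: name -> (preconditions, add effects, delete effects).
actions = {
    "unstack(a,b)": ({"clear(a)", "on(a,b)"}, {"ontable(a)", "clear(b)"}, {"on(a,b)"}),
    "stack(b,a)":   ({"clear(b)", "clear(a)", "ontable(b)"}, {"on(b,a)"},
                     {"ontable(b)", "clear(a)"}),
}
init = frozenset({"on(a,b)", "ontable(b)", "clear(a)"})
goal = {"on(b,a)"}

def plan(init, goal, actions):
    """Blind breadth-first search: returns a shortest plan (unit action costs)."""
    queue, seen = deque([(init, [])]), {init}
    while queue:
        state, prefix = queue.popleft()
        if goal <= state:
            return prefix
        for name, (pre, add, delete) in actions.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, prefix + [name]))
    return None

print(plan(init, goal, actions))  # ['unstack(a,b)', 'stack(b,a)']
```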
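For the third topic, a minimal sketch of model-free RL: tabular Q-learning on a toy chain, where a controller is obtained by trial and error without ever building a model of the dynamics. The 5-state chain, the reward, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: tabular Q-learning (model-free RL) on a small chain.
import random

N, EPISODES, ALPHA, GAMMA, EPS = 5, 500, 0.1, 0.95, 0.1
ACTIONS = (-1, +1)  # move left / move right on the chain
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Deterministic chain dynamics; reward 1 only for reaching the right end."""
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

for _ in range(EPISODES):
    s, done = 0, False
    while not done:
        if random.random() < EPS:  # explore
            a = random.choice(ACTIONS)
        else:  # exploit, breaking ties randomly
            a = max(ACTIONS, key=lambda a: (Q[s, a], random.random()))
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s2, b] for b in ACTIONS)
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s2

# Greedy policy after learning: move right (+1) from every non-terminal state.
print([max(ACTIONS, key=lambda a: Q[s, a]) for s in range(N - 1)])
```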
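Finally, for the fourth topic, a sketch of what a generalized plan looks like: a single policy that solves every instance of a simple "put all blocks on the table" domain, regardless of the number of blocks. The domain and its encoding are toy choices for illustration only.

```python
# Minimal sketch: a generalized policy that solves any instance of one domain.

def policy(on):
    """on maps each block to the block under it (or 'table').
    Rule: pick any clear block still on another block; move it to the table."""
    under = set(on.values())
    for block, below in on.items():
        if below != "table" and block not in under:  # block is clear
            return ("move-to-table", block)
    return None  # goal reached: everything is on the table

def run(on):
    plan = []
    while (act := policy(on)) is not None:
        plan.append(act)
        on[act[1]] = "table"
    return plan

# The same policy works unchanged on instances of any size:
print(run({"a": "b", "b": "c", "c": "table"}))
print(run({f"b{i}": (f"b{i+1}" if i < 9 else "table") for i in range(10)}))
```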