Reachability in Petri nets
Petri nets, also known as vector addition systems, are a long-established model of concurrency with extensive applications in the modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for Petri nets is reachability: whether from a given initial configuration there exists a sequence of valid execution steps that reaches a given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal work in 1981, and the currently best published upper bound, due to Leroux and Schmitz from 2019, is Ackermannian, hence not primitive recursive. Also in 2019, a non-elementary lower bound was established by Czerwiński, Lasota, Lazić, Leroux and Mazowiecki: the reachability problem needs a tower of exponentials of time and space. This lower bound is a major breakthrough: it is the first improvement on the exponential-space lower bound due to Lipton in 1976, after more than 40 years of stagnation. The topic of the PhD is to exploit the novel ideas underlying this proof in order to push the lower bound further. In addition, some other, potentially more tractable, variants of the problem will be investigated.
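To illustrate the step semantics behind the problem, the sketch below explores configurations of a small vector addition system by breadth-first search; the example transitions and the exploration bound are invented for illustration, and this is only a bounded semi-decision sketch, nothing like the actual (Ackermannian) decision procedures.

```python
from collections import deque

def reachable(initial, final, transitions, bound=10_000):
    """Bounded BFS over configurations of a vector addition system.

    A configuration is a tuple of non-negative counters; a transition
    is a vector of increments/decrements, and it may fire only if no
    counter drops below zero.  The state space is infinite, so we cap
    the number of explored configurations: a True answer is reliable,
    a False answer may just mean the bound was too small.
    """
    seen = {initial}
    queue = deque([initial])
    while queue and len(seen) < bound:
        config = queue.popleft()
        if config == final:
            return True
        for t in transitions:
            succ = tuple(c + d for c, d in zip(config, t))
            if min(succ) >= 0 and succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return final in seen

# A toy net: one transition consuming a token and producing two.
# From (1, 0) we can reach (0, 2), but not the other way round.
```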
Infinite state systems - challenging problems
I will talk about the area of theoretical computer science which investigates infinite-state systems. Very roughly speaking, these are infinite graphs represented in some finite way (for example, vertices may be pairs of integers, and transitions may increase or decrease the numbers by some constants). Such infinite graphs are often useful as models of computation. Basic problems about them, however, can be very challenging; an example problem that attracts a lot of attention is reachability of one vertex from another. I plan to present a set of interesting directions which may be investigated as PhD projects.
Statistical modelling in proteomics
In this talk I present a series of related topics in the mathematical modelling of mass spectrometry data. The presentation opens with two algorithms for the generation of isotopic structure (BRAIN and ISOSPEC). Then I briefly touch on the problem of understanding electron-driven reactions, whose principal aim is to induce ion fragmentation; here I apply the mathematical theory of reaction kinetics to estimate the rates of the electron transfer reactions. Last but not least, I present a breakthrough idea: using the mathematical theory of optimal transport for molecule identification.
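To give a flavour of the last idea, the sketch below computes the one-dimensional Wasserstein (earth mover's) distance between two toy spectra, using the closed form available on the line (the area between the two cumulative distribution functions). This is purely illustrative and not the actual method used for molecule identification.

```python
def wasserstein_1d(peaks_a, peaks_b):
    """First Wasserstein distance between two spectra, each given as a
    sorted list of (m/z, intensity) pairs.  In one dimension optimal
    transport has a closed form: the area between the two normalised
    cumulative distribution functions (CDFs)."""
    def cdf(peaks):
        total = sum(inten for _, inten in peaks)
        acc, out = 0.0, {}
        for mz, inten in peaks:
            acc += inten / total
            out[mz] = acc          # CDF value just after this peak
        return out

    ca, cb = cdf(peaks_a), cdf(peaks_b)
    grid = sorted(set(ca) | set(cb))   # merged peak positions
    fa = fb = 0.0
    area, prev = 0.0, grid[0]
    for x in grid:                     # CDFs are constant between peaks
        area += abs(fa - fb) * (x - prev)
        fa, fb = ca.get(x, fa), cb.get(x, fb)
        prev = x
    return area

# Two unit peaks one m/z apart have distance exactly 1.
```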
Machine learning in medicine
Our lab focuses mostly on the development of novel machine learning approaches applied to problems in modern medicine. We are currently working with probabilistic graphical models and various flavors of deep learning methods. We model molecular measurements taken in different medical conditions, such as spatial gene expression from pathological tissues, DNA sequences and gene expression from single tumor cells, or the composition of drug compounds. In my talk I will shortly describe examples of research projects going on in the lab.
Structural graph theory and algorithms
We present theoretical research at the intersection of structural graph theory and algorithm design. The idea is to study various structural and decompositional properties of specific classes of graphs (and other kinds of structures), for instance planarity, sparsity, or tree-likeness, in order to design more efficient algorithms working on such graphs. Conversely, advances in algorithm design often inspire new results in graph theory. This kind of approach is best suited to the design of parameterized algorithms, that is, ones whose complexity is measured using several auxiliary measures, like the quality of a given decomposition. However, the toolbox is applicable also in other paradigms, for instance in approximation algorithms or dynamic data structures. Sometimes we also venture into connections with model theory, where we investigate how the combinatorial and algorithmic properties of classes of graphs interplay with the expressive power of various kinds of logic on them. This area of research is supported by the ERC grants CUTACOMBS (led by Marcin Pilipczuk) and BOBR (led by Michał Pilipczuk).
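As a minimal illustration of the parameterized viewpoint, consider the classic bounded-search-tree algorithm for Vertex Cover, which confines the exponential blow-up to the parameter k (the example graph in the comments is made up for illustration):

```python
def vertex_cover(edges, k):
    """Does the graph (given as a list of edges) have a vertex cover
    of size at most k?  Bounded search tree: the first remaining edge
    (u, v) must be covered, so branch on taking u or taking v into the
    cover.  Depth at most k, two branches per level: O(2^k * m) time,
    exponential only in the parameter k, not in the graph size."""
    if not edges:
        return True       # every edge is covered
    if k == 0:
        return False      # edges remain but the budget is exhausted
    u, v = edges[0]
    return any(vertex_cover([e for e in edges if w not in e], k - 1)
               for w in (u, v))

# Example: a triangle has a vertex cover of size 2 but not of size 1.
```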
Beyond Worst-Case Analysis
There are many cases when some additional properties of the input data allow for better algorithms, e.g., scale-free graphs. One of the goals of my project is to deliver new tools for studying such structural properties, and then to work on algorithms that can exploit this additional structure. The standard worst-case analysis is rarely able to predict the actual behaviour of algorithms, as it is usually overly pessimistic and concentrates on instances that rarely appear in reality. Hence, we need to look for other frameworks that would model the real performance of algorithms, i.e., we need to go "beyond worst-case analysis". In my talk I will highlight some interesting research directions in this area.
New Trends in Text Algorithms
Text algorithms, better known under the name of pattern matching, form a well-established area of theoretical computer science. Still, there are plenty of unsolved problems in the area, especially in non-standard settings and models of computation which are currently actively studied. I will present example open problems that my team and I are planning (or currently trying) to tackle. If you like the Knuth-Morris-Pratt algorithm, the suffix tree, combinatorics on words, classical and less classical data structures, or simply fine-grained complexity of problems in P in general, this project could be of interest to you! In the project I have an open position for a PhD student for the next academic year.
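For a taste of the classical toolbox, here is a textbook sketch of the Knuth-Morris-Pratt prefix (failure) function and its standard use for finding all occurrences of a pattern (the sentinel character is an assumption: it must not occur in either string):

```python
def prefix_function(s):
    """pi[i] is the length of the longest proper prefix of s[:i+1]
    that is also its suffix; computed in O(n) amortised time.  This is
    the failure function at the heart of Knuth-Morris-Pratt."""
    pi = [0] * len(s)
    k = 0
    for i in range(1, len(s)):
        while k and s[i] != s[k]:
            k = pi[k - 1]      # fall back to the next shorter border
        if s[i] == s[k]:
            k += 1
        pi[i] = k
    return pi

def find_occurrences(pattern, text):
    """All starting positions of pattern in text, via the prefix
    function of pattern + sentinel + text."""
    sep = '\x00'   # assumed absent from pattern and text
    pi = prefix_function(pattern + sep + text)
    m = len(pattern)
    return [i - 2 * m for i in range(len(pi)) if pi[i] == m]
```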
Cryptography for Blockchains
I will give a brief overview of the research topics available within the Cryptography and Blockchain Lab (www.crypto.edu.pl). This will include questions from theoretical computer science (lying in the intersection of cryptography with computational complexity theory and information theory), as well as more practical problems directly related to the real-life applications of cryptography and game theory to blockchain and decentralized finance (DeFi).
We offer PhD scholarships funded from the European Research Council (ERC) Advanced Grant whose PI is Stefan Dziembowski.
Robot Learning
Recently a new lab was started at MIM UW which focuses on robot learning: recent advances in machine learning are applied in the domain of robotics in general, and to manipulation and drones in particular. During the talk I will describe ongoing research projects conducted by the group.
Foundations of Interpretable and Reliable Reinforcement Learning
I will present a research topic related to the study of interpretable and reliable reinforcement learning algorithms with applications to robotics. The focus will be on the so-called state-planning policy method. I will present some preliminary results obtained in the MuJoCo and SafetyGym simulation environments, and I will discuss several prospective projects related to the vision-based state-planning policy method and to offline reinforcement learning.
Games, mechanisms and social networks
Game theory, mechanism design, computational social choice and social networks are examples of research areas that, in recent years, have attracted interest from computer scientists working at the interface of computer science, economics, and artificial intelligence. During the talk I will present my research interests in these areas, briefly describe my ongoing and planned research projects, and say a few words about the research group I belong to, which works on topics at this interface.
Large-Scale Data Analysis: Geometric Problems, Probabilistic Solutions
The current age, the age of information, is also an age of big data: datasets involving billions of data points, each data point having anywhere from a few hundred to several million parameters. Examples include the World Wide Web, Wikipedia entries, the medical histories of COVID patients, and cosmological information from the hundreds of billions of currently known galaxies. This makes the task of data storage and analysis almost impossible for humans to accomplish efficiently, thus giving rise to several computational challenges. In geometric inference and topological data analysis, the aim is to represent the data points as lying in some metric or topological space, and to use the geometry and the topological features of the underlying space to enable efficient data storage as well as analysis. We shall briefly discuss some aspects of these computational challenges, showing how mathematical frameworks such as persistent homology, VC dimension, dimensionality reduction and the analysis of random structures help us address them. We'll conclude with a glimpse of the many further questions that lie yet unanswered.
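As one small illustration of dimensionality reduction, a Johnson-Lindenstrauss-style random projection can be sketched as follows; this toy implementation (dimensions and seed chosen arbitrarily) is not tuned for practice:

```python
import math
import random

def random_projection(points, target_dim, seed=0):
    """Project d-dimensional points onto target_dim random Gaussian
    directions, scaled by 1/sqrt(target_dim).  By the Johnson-
    Lindenstrauss lemma, pairwise Euclidean distances are preserved up
    to a small multiplicative error with high probability once
    target_dim is on the order of log(#points) / eps^2."""
    rng = random.Random(seed)
    d = len(points[0])
    # target_dim x d matrix of independent N(0, 1) entries
    R = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(target_dim)]
    scale = 1 / math.sqrt(target_dim)
    return [[scale * sum(row[j] * p[j] for j in range(d)) for row in R]
            for p in points]

# Two 50-dimensional points compressed to 5 coordinates each.
points = [[1.0] * 50, [0.0] * 50]
projected = random_projection(points, target_dim=5)
```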
Dependable distributed systems for the Internet of Things
We have grown accustomed to computer systems in our everyday lives. Most such systems in use today are distributed systems: collections of autonomous computing elements coordinating with each other over a network to provide functionality desired by their users. The Internet of Things vision aims to make such systems even more pervasive, by having computing devices embedded into surrounding physical objects – things – which would allow them to collaborate without humans in the loop, thereby making our lives more convenient, healthy, and safe. With such growing reliance on distributed computing systems, a central question becomes: how to make them dependable in the face of possible failures (and misuse)? Dependability covers, among others, availability, safety, reliability, maintainability, and security. My research group investigates these issues by following an experimental approach: we formulate basic problems, implement prototype systems solving such problems, and experiment with them in the real world to validate our hypotheses.
Optimization in Scheduling for Supercomputers and Datacenters
Modern large-scale computational resources - supercomputers and data centers (the backbone of cloud computing) - are a crucial element of our increasingly digital world. This infrastructure hosts services we all use daily, such as online maps, weather forecasts or popular websites; but it also acts as a primary research instrument, running numerical models in computational biology, chemistry or physics. These platforms are massively parallel, composed of thousands of individual machines and shared by tens to thousands of jobs at any given time. A key to their efficiency is the scheduler: the software allocating jobs to machines. Scheduler policies are currently a mixture of optimization, heuristics and rules of thumb, which obviously is neither a scientific nor an optimal approach. The principal objective of the project is to develop theoretically sound and practically relevant results on how a scheduler should manage the new aspects of modern infrastructure, such as: performance degradation resulting from colocating many jobs on a single node in data centers; jobs with ; or supercomputers with storage hierarchies (burst buffers).
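As a baseline illustration of such scheduling questions, Graham's greedy list scheduling (assign each job, in arrival order, to the machine that currently finishes earliest) can be sketched as follows; real schedulers must layer the aspects above, such as colocation interference and storage hierarchies, on top of cores like this one.

```python
import heapq

def list_schedule(job_lengths, machines):
    """Graham's list scheduling: each job goes to the currently
    least-loaded machine.  This greedy rule is a classic
    (2 - 1/m)-approximation of the optimal makespan.

    Returns the (job, machine) assignment and the resulting makespan
    (the finishing time of the busiest machine)."""
    heap = [(0.0, m) for m in range(machines)]   # (current load, machine id)
    heapq.heapify(heap)
    assignment = []
    for job, length in enumerate(job_lengths):
        load, m = heapq.heappop(heap)            # least-loaded machine
        assignment.append((job, m))
        heapq.heappush(heap, (load + length, m))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

# Five jobs on two machines: greedy yields loads 7 and 5 here,
# while the optimum splits them 6 and 6.
```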