Seminar archive

09.02.43535
Marta Kwiatkowska
University of Oxford
Informatyka Teoretyczna
Strategy synthesis for partially observable stochastic games with neural perception mechanisms

Strategic reasoning is essential to ensure stable multi-agent coordination in complex environments, as it enables synthesis of optimal (or near-optimal) agent strategies and equilibria that guarantee expected outcomes, even in adversarial scenarios. Partially observable stochastic games (POSGs) are a natural model for real-world settings involving multiple agents, uncertainty and partial information, but they lack practical algorithms for computing or approximating optimal values and strategies. Recently, progress has been made for one-sided POSGs, a subclass of two-agent, zero-sum POSGs where only one agent has partial information while the other is assumed to have full knowledge of the state; for this subclass, heuristic search value iteration (HSVI) has been proposed for computing approximately optimal values and strategies.

However, many realistic autonomous coordination scenarios involve agents perceiving continuous environments using data-driven observation functions, typically implemented as neural networks (NNs). Such perception mechanisms bring new challenges, notably continuous state spaces, which are inherent to NN-enabled perception because of standard training regimes.

This talk will discuss progress in developing a model class and algorithms for one-sided POSGs with neural perception mechanisms that work directly with their continuous state space. We build on continuous-state POMDPs with NN perception mechanisms, where the key idea is that ReLU neural network classifiers induce a finite decomposition of the continuous environment into polyhedra, one per classification label. This yields a piecewise constant representation of the value, reward and perception functions, which forms the basis for a variant of HSVI, a point-based solution method that computes lower and upper bounds on the value function from a given belief in order to extract an (approximately) optimal strategy. We extend these ideas to zero-sum POSGs, which requires solving a normal-form game at each stage and iteration, and goes significantly beyond HSVI for finite POSGs.
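
To make the belief-space view concrete, here is a minimal Python sketch (with an invented two-state model; only the update rule itself is standard) of the Bayesian belief update b'(s') ∝ Σ_s b(s)·T(s,a,s')·O(s',a,o) that point-based methods such as HSVI repeatedly apply while exploring beliefs; it does not attempt the piecewise-constant value representations or the game-solving steps described above.

    # Minimal Bayesian belief update for a finite POMDP (illustrative toy model).
    # The transition kernel T[a][s][s'] and observation kernel O[a][s'][o] are
    # made-up numbers; only the update rule itself is standard.

    def belief_update(belief, a, o, T, O):
        """Return the posterior belief after taking action a and observing o."""
        n = len(belief)
        new = [sum(belief[s] * T[a][s][s2] for s in range(n)) * O[a][s2][o]
               for s2 in range(n)]
        z = sum(new)                            # probability of observing o
        if z == 0:
            raise ValueError("observation has zero probability under this belief")
        return [p / z for p in new]

    # Two states, one action, two observations.
    T = {0: [[0.9, 0.1], [0.2, 0.8]]}           # T[a][s][s']
    O = {0: [[0.7, 0.3], [0.1, 0.9]]}           # O[a][s'][o]
    b = [0.5, 0.5]
    print(belief_update(b, 0, 1, T, O))         # posterior after observing o=1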

Based on an invited lecture at CSL 2024
http://fun2model.org/papers/kwi24.pdf

 

21.10.81865
Alexander Wolff
Universität Würzburg
Informatyka Teoretyczna
Eliminating Crossings in Ordered Graphs

Drawing a graph in the plane with as few crossings as possible is one of the central problems in graph drawing and computational geometry. Another option is to remove the smallest number of vertices or edges such that the remaining graph can be drawn without crossings. We study both problems in a book-embedding setting for ordered graphs, that is, graphs with a fixed vertex order. In this setting, the vertices lie on a straight line, called the spine, in the given order, and each edge must be drawn on one of several pages of a book such that every edge has at most a fixed number of crossings. In book embeddings, there is another way to reduce or avoid crossings; namely by using more pages. The minimum number of pages needed to draw an ordered graph without any crossings is its (fixed-vertex-order) page number.

We show that the page number of an ordered graph with n vertices and m edges can be computed in 2^m·poly(n) time. An O(log n)-approximation of this number can be computed efficiently. We can decide in 2^{O(d√k·log(d+k))}·poly(n) time whether it suffices to delete k edges of an ordered graph to obtain a d-planar layout (where every edge crosses at most d other edges) on one page. As an additional parameter, we consider the size h of a hitting set, that is, a set of points on the spine such that every edge, seen as an open interval, contains at least one of the points. For h=1, we can efficiently compute the minimum number of edges whose deletion yields fixed-vertex-order page number p. For h>1, we give an XP algorithm with respect to h+p. Finally, we consider spine+t-track drawings, where some but not all vertices lie on the spine. The vertex order on the spine is given; we must map every vertex that does not lie on the spine to one of t tracks, each of which is a straight line on a separate page, parallel to the spine.
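
For intuition, the following brute-force Python sketch (our own illustration, not the 2^m·poly(n) algorithm of the paper) computes the fixed-vertex-order page number of a small ordered graph, using the standard fact that two edges on the same page cross exactly when their endpoints interleave along the spine.

    from itertools import product

    def crossing(e, f):
        """Edges e=(a,b), f=(c,d) on the same page cross iff their endpoints
        interleave in the fixed vertex order along the spine."""
        (a, b), (c, d) = sorted(e), sorted(f)
        return a < c < b < d or c < a < d < b

    def page_number(n, edges):
        """Smallest p such that the edges fit on p pages with no two same-page
        edges crossing. Exhaustive search over all p^m page assignments."""
        m = len(edges)
        for p in range(1, m + 1):
            for assign in product(range(p), repeat=m):
                ok = all(not (assign[i] == assign[j] and crossing(edges[i], edges[j]))
                         for i in range(m) for j in range(i + 1, m))
                if ok:
                    return p
        return 0  # graph with no edges

    # K4 with spine order 0,1,2,3: edges (0,2) and (1,3) cross, so 2 pages.
    print(page_number(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))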

Joint work with Akanksha Agrawal, Sergio Cabello, Michael Kaufmann, Saket Saurabh, Roohani Sharma, and Yushi Uno.

https://arxiv.org/abs/2404.09771

07.12.21630
Michał Seweryn
Université Libre de Bruxelles
Informatyka Teoretyczna
Recent results in graph product structure theory

Graph product structure theory describes complicated graphs as subgraphs of strong products of simpler building blocks. Usually, the strong product involves three graphs: a graph of bounded treewidth, a small complete graph, and a path. For example, Dujmović et al. showed that every planar graph is a subgraph of the strong product of a treewidth-3 graph, a complete graph on three vertices, and a path. This theorem has been the key to solving many long-standing problems about planar graphs, and is arguably the most important result of the graph product structure theory.

In my talk I will discuss some of the recent results in this field. First I will discuss two generalizations of the product structure theorem for planar graphs: to k-planar graphs and to k-th powers of planar graphs of bounded degree. The distinguishing property of these theorems is that the bound on the treewidth in the product is an absolute constant, independent of k and the maximum degree. Then, I will discuss some product structure theorems in which an n-vertex graph is a subgraph of the strong product of two graphs: a graph of constant treewidth and a complete graph on O(√n) vertices. These theorems are qualitative strengthenings of the √n-separator theorems of Lipton-Tarjan and Alon-Seymour-Thomas.

Joint works with Marc Distel, Vida Dujmović, David Eppstein, Robert Hickingbotham, Gwenaël Joret, Piotr Micek, Pat Morin, and David Wood

13.08.87340
Ola Svensson
École Polytechnique Fédérale de Lausanne
Informatyka Teoretyczna
The Price of Explainability for Clustering

Given a set of points in d-dimensional space, an explainable clustering is one where the clusters are specified by a tree of axis-aligned threshold cuts. Dasgupta et al. (ICML 2020) posed the question of the price of explainability: the worst-case ratio of the cost of the best explainable clustering to that of the best unconstrained clustering.
 
We show that the price of explainability for k-medians is at most 1+H_{k−1}; in fact, we show that the popular Random Thresholds algorithm has exactly this price of explainability, matching the known lower bound constructions. We complement our tight analysis of this particular algorithm by constructing instances where the price of explainability (using any algorithm) is at least (1−o(1))·ln k, showing that our result is best possible, up to lower-order terms. We also improve the price of explainability for the k-means problem to O(k·ln ln k) from the previous O(k·ln k), considerably closing the gap to the lower bound of Ω(k).
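
To illustrate what an explainable clustering is, here is a small Python sketch that builds a threshold tree over given reference centers by sampling uniformly random axis-aligned cuts. It is only in the spirit of the Random Thresholds algorithm mentioned above (the exact sampling scheme analyzed in the paper differs), and all names and data below are invented.

    import random

    def threshold_tree(centers, dims):
        """Separate the given centers with uniformly random axis-aligned cuts.
        Internal nodes are ("cut", axis, threshold, left, right); leaves are centers.
        A sketch only: the paper's Random Thresholds algorithm samples cuts differently."""
        if len(centers) == 1:
            return centers[0]
        while True:
            j = random.randrange(dims)
            lo = min(c[j] for c in centers)
            hi = max(c[j] for c in centers)
            if lo == hi:
                continue                       # this axis cannot separate these centers
            theta = random.uniform(lo, hi)
            left = [c for c in centers if c[j] <= theta]
            right = [c for c in centers if c[j] > theta]
            if left and right:                 # keep only cuts that separate something
                return ("cut", j, theta,
                        threshold_tree(left, dims), threshold_tree(right, dims))

    def assign(tree, point):
        """Route a point down the threshold tree to its explainable cluster center."""
        while isinstance(tree, tuple) and tree[0] == "cut":
            _, j, theta, left, right = tree
            tree = left if point[j] <= theta else right
        return tree

    centers = [(0.0, 0.0), (1.0, 0.2), (0.4, 1.0)]
    tree = threshold_tree(centers, dims=2)
    print(assign(tree, (0.9, 0.1)))            # typically the center (1.0, 0.2)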
 
Finally, we study the algorithmic question of finding the best explainable clustering: We show that explainable k-medians and k-means cannot be approximated better than O(ln k), under standard complexity-theoretic conjectures. This essentially settles the approximability of explainable k-medians and leaves open the intriguing possibility to get significantly better approximation algorithms for k-means than its price of explainability.
 
This is joint work with Anupam Gupta, Madhusudhan Reddy Pittu, and Rachel Yuan.

20.04.35320
Ruta Mehta
University of Illinois at Urbana-Champaign
Informatyka Teoretyczna
Competitive division of goods, bads, and mixed: existence, computation, and complexity

Fair division is the problem of allocating a set of items among agents in a fair and efficient manner. This age-old problem, mentioned even in the Bible, arises naturally in a wide range of real-life settings, for example, school seat assignments, partnership dissolution, sharing of satellites, and dividing costs for climate resilience. Division based on competitive equilibrium (CE) has emerged as one of the best mechanisms for this problem. The existence and computability of CE have been extensively studied when all items are disposable goods, while the problem is less explored when some of them are non-disposable chores (bads). In this talk, I will discuss recent algorithmic advances on the computation of CE when each item may be a good, a chore, or both (mixed).

I will first consider the case of additive valuations, where, when all items are goods, the CE set is well known to be captured by convex programming formulations and thereby forms a convex set. In sharp contrast, with chores, the CE set may be nonconvex and disconnected. I will discuss how to handle this nonconvexity through a new exterior-point method that finds an approximate CE in polynomial time (FPTAS). This method seems general enough to work with any mathematical formulation that optimizes a coordinate-wise monotone function over linear constraints. Finally, I will discuss recent developments on existence and computational complexity in the exchange setting (barter system).

Based on joint works with Shant Boodaghians, Bhaskar Ray Chaudhury, Jugal Garg, and Peter McGlaughlin.

29.03.5203
Mikkel Thorup
University of Copenhagen
Informatyka Teoretyczna
Reconstructing the Tree of Life (Fitting Distances by Tree Metrics)

We consider the numerical taxonomy problem of fitting an S×S distance matrix D with a tree metric T. Here T is a weighted tree spanning S where the path lengths in T induce a metric on S. If there is a tree metric matching D exactly, then it is easily found. If there is no exact match, then for some k, we want to minimize the L_k norm of the errors, that is, pick T so as to minimize ||D−T||_k = (Σ_{i,j∈S} |D(i,j)−T(i,j)|^k)^{1/k}.
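
As a small illustration of this objective, the following Python sketch (a toy of our own; finding the optimal T is the actual problem) evaluates the L_k error of a candidate weighted tree against a distance matrix.

    import heapq

    def tree_distances(adj):
        """All-pairs path lengths in a weighted tree via Dijkstra from each node
        (simple, and good enough for small examples)."""
        dist = {}
        for src in adj:
            d = {src: 0.0}
            pq = [(0.0, src)]
            while pq:
                du, u = heapq.heappop(pq)
                if du > d.get(u, float("inf")):
                    continue
                for v, w in adj[u]:
                    if du + w < d.get(v, float("inf")):
                        d[v] = du + w
                        heapq.heappush(pq, (d[v], v))
            dist[src] = d
        return dist

    def lk_error(D, adj, leaves, k):
        """(sum_{i,j} |D(i,j) - T(i,j)|^k)^(1/k) for the tree metric T induced
        by the weighted tree adj on the labels in `leaves`."""
        T = tree_distances(adj)
        return sum(abs(D[i][j] - T[leaves[i]][leaves[j]]) ** k
                   for i in range(len(leaves)) for j in range(len(leaves))) ** (1.0 / k)

    # Toy example: species a, b, c hanging off one internal node r.
    adj = {"r": [("a", 1.0), ("b", 1.0), ("c", 2.0)],
           "a": [("r", 1.0)], "b": [("r", 1.0)], "c": [("r", 2.0)]}
    D = [[0, 2, 3], [2, 0, 3], [3, 3, 0]]           # observed distances
    print(lk_error(D, adj, ["a", "b", "c"], k=2))   # 0.0: this tree fits exactly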

An evolutionary tree induces a hierarchical classification of species and this is not just tied to biology. Medicine, ecology and linguistics are just some of the fields where this concept appears, and it is even an integral part of machine learning and data science. Fundamentally, if we can approximate distances with a tree, then they are much easier to reason about: many questions that are NP-hard for general metrics can be answered in linear time on tree metrics. In fact, humans have appreciated hierarchical classifications at least since Plato and Aristotle (350 BC).

The numerical taxonomy problem is important in practice and many heuristics have been proposed. In this talk we will review the basic algorithmic theory, results, and techniques for the problem, including the most recent result from FOCS'21 [Vincent Cohen-Addad et al., 2021]. The results paint a varied landscape with big differences between different moments, and with some very nice open problems remaining.

- At STOC'93, Farach, Kannan, and Warnow [Martin Farach et al., 1995] proved that under L∞, we can find the optimal ultrametric. Almost all other variants of the problem are APX-hard.

- At SODA'96, Agarwala, Bafna, Farach, Paterson, and Thorup [Richa Agarwala et al., 1999] showed that for any norm L_k, k≥1, if the best ultrametric can be α-approximated, then the best tree metric can be 3α-approximated. In particular, this implied a 3-approximation for tree metrics under L∞.

- At FOCS'05, Ailon and Charikar [Nir Ailon and Moses Charikar, 2011] showed that for any L_k, k≥1, we can get an approximation factor of O(((log n)(log log n))^{1/k}) for both tree and ultrametrics. Their paper was focused on the L_1 norm, and they wrote "Determining whether an O(1) approximation can be obtained is a fascinating question".

- At FOCS'21, Cohen-Addad, Das, Kipouridis, Parotsidis, and Thorup [Vincent Cohen-Addad et al., 2021] showed that indeed a constant factor is possible for L_1 for both tree and ultrametrics. This uses the special structure of L_1 in relation to hierarchies.

- The status of L_k is wide open for 1<k<∞. All we know is that the approximation factor is between Ω(1) and O((log n)(log log n)).

13.11.51637
Roman Madej
Podstawy Informatyki
Modular Construction of Fixed Point Combinators and Clocked Böhm Trees by Jörg Endrullis, Dimitri Hendriks and Jan Willem Klop

Fixed point combinators (and their generalization: looping combinators) are classic notions belonging to the heart of λ-calculus and logic. We start with an exploration of the structure of fixed point combinators (fpc's), vastly generalizing the well-known fact that if Y is an fpc, Y(SI) is again an fpc, generating the Böhm sequence of fpc's. Using the infinitary λ-calculus we devise infinitely many other generation schemes for fpc's. In this way we find schemes and building blocks to construct new fpc's in a modular way. Having created a plethora of new fixed point combinators, the task is to prove that they are indeed new. That is, we have to prove their β-inconvertibility. Known techniques via Böhm Trees do not apply, because all fpc's have the same Böhm Tree (BT). Therefore, we employ 'clocked BTs', with annotations that convey information of the tempo in which the data in the BT are produced. BTs are thus enriched with an intrinsic clock behaviour, leading to a refined discrimination method for λ-terms. The corresponding equality is strictly intermediate between =β and =BT, the equality in the classical models of λ-calculus. An analogous approach pertains to Lévy–Longo and Berarducci trees. Finally, we increase the discrimination power by a precision of the clock notion that we call 'atomic clock'.
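
As a concrete aside, the defining property of an fpc, Y f = f (Y f), can be demonstrated in any language with first-class functions. The Python sketch below uses the call-by-value variant Z of Curry's Y (plain Y diverges under strict evaluation); it only illustrates the fixed-point property, not the clocked-tree discrimination technique of the paper.

    # Call-by-value fixed point combinator: the so-called Z combinator,
    # an eta-expanded Y that works under Python's strict evaluation.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # A non-recursive "step" functional whose fixed point is factorial.
    fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

    fact = Z(fact_step)        # fact = fact_step(fact): the fixed-point property
    print(fact(5))             # 120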


The theory of sage birds (technically called fixed point combinators) is a fascinating and basic part of combinatory logic; we have only scratched the surface.

08.07.32472
Rafał Loska
Podstawy Informatyki
Strict monotonic trees arising from evolutionary processes: combinatorial and probabilistic study by Olivier Bodini, Antoine Genitrini, Cécile Mailler and Mehdi Naima

In this paper we study two models of labelled random trees that generalise the original unlabelled Schröder tree. Our new models can be seen as models for phylogenetic trees in which nodes represent species and labels encode the order of appearance of these species, and thus the chronology of evolution. One important feature of our trees is that they can be generated efficiently thanks to a dynamical, recursive construction. Our first model is an increasing tree in the classical sense (labels increase along each branch of the tree and each label appears only once). To better model phylogenetic trees, we relax the rules of labelling by allowing repetitions in the second model. For each of the two models, we provide asymptotic theorems for different characteristics of the tree (e.g. degree of the root, degree distribution, height, etc.), thus giving extensive information about the typical shapes of these trees. We also provide efficient algorithms to generate large trees in the two models. The proofs are based on a combination of analytic combinatorics, probabilistic methods, and bijective methods (we exhibit bijections between our models and well-known models of the literature such as permutations and Stirling numbers of both kinds). It turns out that even though our models are labelled, they can be specified simply in the world of ordinary generating functions. However, the resulting generating functions will be formal. Then, by applying Borel transforms the models will be amenable to techniques of analytic combinatorics.
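
For a feel of the dynamical, recursive construction, here is a generic Python sampler of a uniform random recursive tree, the simplest kind of increasing tree (each new, larger label attaches to a uniformly random earlier node). The paper's two models impose further structure on top of this idea, so this is an illustration only.

    import random

    def random_increasing_tree(n):
        """Uniform random recursive tree on labels 1..n: node i attaches to a
        uniformly random earlier node, so labels increase along every branch.
        (A generic increasing tree; the paper's models add further structure.)"""
        parent = {1: None}
        for label in range(2, n + 1):
            parent[label] = random.randint(1, label - 1)
        return parent

    print(random_increasing_tree(10))   # parent pointers of a 10-node tree
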
04.03.13307
Sebastian Spyrzewski
Podstawy Informatyki
A characterization of lambda-terms transforming numerals by Paweł Parys

It is well known that simply typed λ-terms can be used to represent numbers, as well as some other data types. We show that λ-terms of each fixed (but possibly very complicated) type can be described by a finite piece of information (a set of appropriately defined intersection types) and by a vector of natural numbers. On the one hand, the description is compositional: having only the finite piece of information for two closed λ-terms M and N, we can determine its counterpart for M N, and a linear transformation that, applied to the vectors of numbers for M and N, gives us the vector for M N. On the other hand, when a λ-term represents a natural number, then this number is approximated by a number in the vector corresponding to this λ-term. As a consequence, we prove that in a λ-term of a fixed type, we can store only a fixed number of natural numbers, in such a way that they can be extracted using λ-terms. More precisely, while representing k numbers in a closed λ-term of some type, we only require that there are k closed λ-terms M_1, ..., M_k such that M_i takes as argument the λ-term representing the k-tuple, and returns the i-th number in the tuple (we do not require that, using λ-calculus, one can construct the representation of the k-tuple out of the k numbers in the tuple). Moreover, the same result holds when we allow that the numbers can be extracted approximately, up to some error (even when we only want to know whether a set is bounded or not). All the results remain true when we allow the Y combinator (recursion) in our λ-terms, as well as uninterpreted constants.
20.04.76384
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
On a Problem of Steinhaus

Let N be a positive integer. A sequence X=(x_1,x_2,…,x_N) of points in the unit interval [0,1) is piercing if {x_1,x_2,…,x_n}∩[i/n,(i+1)/n)≠∅ holds for every n=1,2,…,N and every i=0,1,…,n−1. In 1958 Steinhaus asked whether piercing sequences can be arbitrarily long. A negative answer was provided by Schinzel, who proved that any such sequence may have at most 74 elements. This was later improved to the best possible value of 17 by Warmus, and independently by Berlekamp and Graham. We study a more general variant of piercing sequences. Let f(n)≥n be an infinite nondecreasing sequence of positive integers. A sequence X=(x_1,x_2,…,x_{f(N)}) is f-piercing if {x_1,x_2,…,x_{f(n)}}∩[i/n,(i+1)/n)≠∅ holds for every n=1,2,…,N and every i=0,1,…,n−1. A special case of f(n)=n+d, with d a fixed nonnegative integer, was studied by Berlekamp and Graham. They noticed that for each d≥0, the maximum length of any (n+d)-piercing sequence is finite. Expressing this maximum length as s(d)+d, they obtained an exponential upper bound on the function s(d), which was later improved to s(d)=O(d^3) by Graham and Levy. Recently, Konyagin proved that 2d⩽s(d)<200d holds for all sufficiently big d. Using a different technique based on the Farey fractions and stick-breaking games, we prove here that the function s(d) satisfies ⌊c_1·d⌋⩽s(d)⩽c_2·d+o(d), where c_1=ln2/(1−ln2)≈2.25 and c_2=(1+ln2)/(1−ln2)≈5.52. We also prove that there exists an infinite f-piercing sequence with f(n)=γ·n+o(n) if and only if γ≥1/ln2≈1.44. This is joint work with Marcin Anholcer, Jarosław Grytczuk, Grzegorz Gutowski, Jakub Przybyło, Rafał Pyzik, and Mariusz Zając.
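
The definition is easy to check mechanically; the following Python sketch verifies the f-piercing property of a finite sequence directly from the definition above (the example data is ours).

    def is_f_piercing(X, f, N):
        """Check the definition: for every n <= N and every i < n, the prefix
        (x_1, ..., x_{f(n)}) must hit the interval [i/n, (i+1)/n)."""
        for n in range(1, N + 1):
            prefix = X[:f(n)]
            for i in range(n):
                if not any(i / n <= x < (i + 1) / n for x in prefix):
                    return False
        return True

    # f(n) = n gives Steinhaus's original notion of a piercing sequence.
    X = [0.5, 0.3, 0.8]                        # a (short) piercing sequence
    print(is_f_piercing(X, f=lambda n: n, N=3))   # True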

  1. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Grzegorz Gutowski, Jakub Przybyło, Rafał Pyzik, Mariusz Zając. On a Problem of Steinhaus. arXiv:2111.01887. (2021).
08.08.38053
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Recoloring Unit Interval Graphs with Logarithmic Recourse Budget

We study the problem of coloring a unit interval graph that changes dynamically. In our model the unit intervals are added or removed one at a time and have to be colored immediately, so that no two overlapping intervals share the same color. After each update, only a limited number of intervals are allowed to be recolored. The limit on the number of recolorings per update is called the recourse budget. In this paper we show that if the graph remains k-colorable at all times and the updates consist of insertions only, then we can achieve an amortized recourse budget of O(k^7 log n) while maintaining a proper coloring with k colors. This is an exponential improvement over the result in [Bosek et al., Recoloring Interval Graphs with Limited Recourse Budget. SWAT 2020] in terms of both k and n. We complement this result by showing a lower bound of Ω(n) on the amortized recourse budget in the fully dynamic setting. Our incremental algorithm can be efficiently implemented. As a byproduct of independent interest, we include a new result on coloring proper circular-arc graphs. Let L be the maximum number of arcs intersecting in one point for some set of unit circular arcs A. We show that if there is a set A′ of non-intersecting unit arcs of size L^2−1 such that A∪A′ does not contain L+1 arcs intersecting in one point, then it is possible to color A with L colors. This complements the work on unit circular-arc coloring, which specifies sufficient conditions needed to color A with L+1 colors or more. This is joint work with Anna Zych-Pawlewicz.

  1. Bartłomiej Bosek, Anna Zych-Pawlewicz. Recoloring Unit Interval Graphs with Logarithmic Recourse Budget. arXiv:2202.08006. (2022).
05.03.70912
Szymon Toruńczyk
University of Warsaw
Informatyka Teoretyczna
Ordered graphs of bounded twin-width and monadically NIP graph classes

We establish a list of characterizations of bounded twin-width for hereditary classes of totally ordered graphs: as classes of at most exponential growth studied in enumerative combinatorics, as monadically NIP classes studied in model theory, as classes that do not transduce the class of all graphs studied in finite model theory, and as classes for which model checking first-order logic is fixed-parameter tractable studied in algorithmic graph theory.

This has several consequences. First, it allows us to show that every hereditary class of ordered graphs either has at most exponential growth, or has at least factorial growth. This settles a question first asked by Balogh, Bollobás, and Morris [Eur. J. Comb. '06] on the growth of hereditary classes of ordered graphs, generalizing the Stanley-Wilf conjecture/Marcus-Tardos theorem. Second, it gives a fixed-parameter approximation algorithm for twin-width on ordered graphs. Third, it yields a full classification of fixed-parameter tractable first-order model checking on hereditary classes of ordered binary structures. Fourth, it provides a model-theoretic characterization of classes with bounded twin-width.

Those results are joint work with Bonnet, Giocanti, Ossona de Mendez, Simon, and Thomassé, accepted to STOC'22.

Time permitting, I will also discuss the more general landscape of monadically NIP graph classes, generalizing both nowhere dense classes and classes of bounded twin-width.

22.12.54429
Andrii Orap, Maksym Zub

Grundy Distinguishes Treewidth from Pathwidth

Structural graph parameters, such as treewidth, pathwidth, and clique-width, are a central topic of study in parameterized complexity. A main goal of research in this area is to understand the "price of generality" of these widths: as we transition from more restrictive notions to more general ones, which are the problems whose complexity degrades from fixed-parameter tractable to intractable? This kind of question is by now very well studied, but, quite strikingly, the algorithmic frontier between the two (arguably) most central width notions, treewidth and pathwidth, is still poorly understood: no natural graph problem is currently known to be W-hard for one but FPT for the other. Indeed, a surprising development of the last few years has been the observation that, for many of the most paradigmatic problems, their complexities for the two parameters actually coincide exactly, despite the fact that treewidth is a much more general parameter. It would thus appear that the extra generality of treewidth over pathwidth often comes "for free".

Our main contribution in this paper is the discovery of the first natural example where this generality comes at a high price. We consider Grundy Coloring, a variation of coloring where we try to compute the worst possible coloring that can be assigned to a graph by the greedy First-Fit algorithm. We show that this well-studied problem is FPT parameterized by pathwidth; however, it becomes significantly harder (W[1]-hard) when parameterized by treewidth. Furthermore, we show that Grundy Coloring makes a second complexity jump for more general widths, becoming para-NP-hard for clique-width. Grundy Coloring thus nicely captures the complexity trade-offs among the three most well-studied parameters. Completing the picture, we show that Grundy Coloring is FPT parameterized by modular-width.

23.04.51637
Jan Koscisz
Podstawy Informatyki
Functions-as-constructors higher-order unification: extended pattern unification by Tomer Libal and Dale Miller
Unification is a central operation in constructing a range of computational logic systems based on first-order and higher-order logics. First-order unification has several properties that guide its incorporation in such systems. In particular, first-order unification is decidable, unary, and can be performed on untyped term structures. None of these three properties hold for full higher-order unification: unification is undecidable, unifiers can be incomparable, and term-level typing can dominate the search for unifiers. The so-called pattern subset of higher-order unification was designed to be a small extension to first-order unification that respects the laws governing λ-binding (i.e., the equalities for α, β, and η-conversion) but which also satisfied those three properties. While the pattern fragment of higher-order unification has been used in numerous implemented systems and in various theoretical settings, it is too weak for many applications. This paper defines an extension of pattern unification that should make it more generally applicable, especially in proof assistants that allow for higher-order functions. This extension’s main idea is that the arguments to a higher-order, free variable can be more than just distinct bound variables. In particular, such arguments can be terms constructed from (sufficient numbers of) such bound variables using term constructors and where no argument is a subterm of any other argument. We show that this extension to pattern unification satisfies the three properties mentioned above.
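
For reference, the decidable and unary base case that the pattern fragment extends is ordinary first-order unification; the Python sketch below (our own toy representation: variables are capitalized strings, compound terms are tuples) computes most general unifiers. The higher-order pattern extension discussed in the paper is substantially more involved.

    def is_var(t):
        return isinstance(t, str) and t[0].isupper()   # convention: "X", "Y" are variables

    def walk(t, s):
        """Follow substitution bindings until a non-bound term is reached."""
        while is_var(t) and t in s:
            t = s[t]
        return t

    def occurs(v, t, s):
        t = walk(t, s)
        if t == v:
            return True
        return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

    def unify(t1, t2, s=None):
        """Most general unifier of two first-order terms, or None.
        Terms: a variable "X", a constant "c", or ("f", arg1, ..., argk)."""
        s = {} if s is None else s
        t1, t2 = walk(t1, s), walk(t2, s)
        if t1 == t2:
            return s
        if is_var(t1):
            return None if occurs(t1, t2, s) else {**s, t1: t2}
        if is_var(t2):
            return unify(t2, t1, s)
        if (isinstance(t1, tuple) and isinstance(t2, tuple)
                and t1[0] == t2[0] and len(t1) == len(t2)):
            for a, b in zip(t1[1:], t2[1:]):
                s = unify(a, b, s)
                if s is None:
                    return None
            return s
        return None

    # unify f(X, g(a)) with f(b, g(Y)):  X -> b, Y -> a
    print(unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y"))))
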
28.10.32471
Roch Wójtowicz
Podstawy Informatyki
Sharp thresholds of graph properties, and the k-SAT problem by Ehud Friedgut, with an appendix by Jean Bourgain

Consider G(n, p) to be the probability space of random graphs on n vertices with edge probability p. We will be considering subsets of this space defined by monotone graph properties. A monotone graph property P is a property of graphs such that

a) P is invariant under graph automorphisms;
b) if a graph H has property P, then so does any graph G having H as a subgraph.

A monotone symmetric family of graphs is a family defined by such a property. One of the first observations made about random graphs by Erdős and Rényi in their seminal work on random graph theory [12] was the existence of threshold phenomena, the fact that for many interesting properties P, the probability of P appearing in G(n, p) exhibits a sharp increase at a certain critical value of the parameter p. Bollobás and Thomason proved the existence of threshold functions for all monotone set properties ([6]), and in [14] it is shown that this behavior is quite general, and that all monotone graph properties exhibit threshold behavior, i.e. the probability of their appearance increases from values very close to 0 to values close to 1 in a very small interval. More precise analysis of the size of the threshold interval is done in [7]. This threshold behavior which occurs in various settings which arise in combinatorics and computer science is an instance of the phenomenon of phase transitions which is the subject of much interest in statistical physics. One of the main questions that arises in studying phase transitions is: how "sharp" is the transition? For example, one of the motivations for this paper arose from the question of the sharpness of the phase transition for the property of satisfiability of a random k-CNF Boolean formula. Nati Linial, who introduced me to this problem, suggested that although much concrete analysis was being performed on this problem the best approach would be to find general conditions for sharpness of the phase transition, answering the question posed in [14] as to the relation between the length of the threshold interval and the value of the critical probability.
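
To see a threshold empirically, the following Python sketch (all parameters are ours) estimates, by Monte Carlo, the probability that G(n,p) is connected, a monotone graph property whose threshold is around ln(n)/n; increasing p through that window makes the probability jump from near 0 to near 1.

    import random

    def connected(n, p):
        """Sample G(n,p) and test connectivity with a DFS from vertex 0."""
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < p:
                    adj[u].append(v)
                    adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    n, trials = 50, 200
    for p in [0.02, 0.05, 0.08, 0.11, 0.14]:   # threshold near ln(n)/n ≈ 0.078
        freq = sum(connected(n, p) for _ in range(trials)) / trials
        print(f"p = {p:.2f}: Pr[connected] ≈ {freq:.2f}")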

23.05.76278
Roch Wójtowicz
Podstawy Informatyki
THE SEMINAR AND ROCH WÓJTOWICZ'S TALK HAVE BEEN MOVED TO 11.05.2022


18.07.84601
Raphael Steiner
ETH Zürich
Informatyka Teoretyczna
New bounds for relatives of Hadwiger's conjecture

In this talk, I will present some recent results on two variants of Hadwiger's conjecture.

First, I will discuss the so-called Odd Hadwiger's conjecture, a strengthening of Hadwiger's conjecture proposed by Gerards and Seymour in 1995. I will show how, using a simple new trick, one can reduce the problem of coloring graphs with no odd K_t-minor to coloring graphs with no K_t-minor up to a constant factor of 2, thereby improving previous upper bounds for this problem. Then, I will outline how a similar idea can be used to significantly improve the known bounds for clustered colorings of odd K_t-minor-free graphs, in which we look for (possibly improper) colorings with monochromatic components of small size.

Second, I will discuss the so-called List Hadwiger's conjecture, which states that there exists a constant c such that every graph with no K_t-minor is ct-choosable (i.e., list colorable). I will show a probabilistic construction of a new lower bound 2t−o(t) for list coloring K_t-minor-free graphs; this refutes a conjecture by Kawarabayashi and Mohar which stated that one can take c=3/2. I will then show how some well-chosen modifications to our construction can be used to prove lower bounds also for list coloring H-minor-free graphs where H is non-complete. In particular, I will show that K_{s,t}-minor-free graphs for large comparable s and t can have list chromatic number at least (1−o(1))(s+t+min(s,t)); this refutes a 2001 conjecture by Woodall.

12.03.65436
Mathieu Mari
University of Warsaw and IDEAS-NCBR
Informatyka Teoretyczna
A (2+ε)-Approximation Algorithm for Maximum Independent Set of Rectangles

We study the Maximum Independent Set of Rectangles (MISR) problem, where we are given a set of axis-parallel rectangles in the plane and the goal is to select a subset of non-overlapping rectangles of maximum cardinality. In a recent breakthrough, Mitchell [2021] obtained the first constant-factor approximation algorithm for MISR. His algorithm achieves an approximation ratio of 10 and is based on a dynamic program that, intuitively, recursively partitions the input plane into special polygons called corner-clipped rectangles (CCRs), without intersecting certain special horizontal line segments called fences.

In this talk, I will present a (2+ε)-approximation algorithm for MISR which is also based on a recursive partitioning scheme. First, we use a partition into a class of axis-parallel polygons with constant complexity each that are more general than CCRs. This allows us to provide an arguably simpler analysis and at the same time already improves the approximation ratio to 4. Then, using a more elaborate charging scheme and a recursive partitioning into general axis-parallel polygons with constant complexity, we improve our approximation ratio to 2+ε. In particular, we construct a recursive partitioning based on more general fences which can be sequences of up to O(1/ε) line segments each. This partitioning routine and our other new ideas may be useful for future work towards a PTAS for MISR.

At the end of the talk, I will present a bunch of open questions related to the problem.

This is joint work with Waldo Gálvez, Arindam Khan, Tobias Mömke, Madhusudhan Reddy and Andreas Wiese.

29.12.48953
Ignacy Buczek, Michał Woźny

Sorting Balls and Water: Equivalence and Computational Complexity

Sorting problems have long been an object of study of many kinds. Recently, two sorting-themed mobile games have gained popularity. In these games, the player is given bins filled with colored objects (balls in one game, water in the other) and several empty bins, and the goal is to sort the objects by color. In a single move, the player may transfer objects from one bin to another, provided that the color of the transferred objects matches the color of the topmost object in the target bin, or that the target bin is empty.

The authors study the computational complexity of these puzzles. First, they show that the two games are equivalent with respect to solvability. More precisely, whether an initial configuration of the game is solvable does not depend on whether the objects behave like balls or like water. They also prove that every yes-instance admits a solution of polynomial size, which shows that the solvability problem for these puzzles is in NP. They then establish that the problem is NP-complete. They also find polynomial-time algorithms for special cases. Finally, they ask how many empty bins are needed for an instance to be solvable regardless of the initial placement of the objects, and they show nontrivial lower and upper bounds depending on the number of initially filled bins and their capacity.
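
To make the rules concrete, here is a small Python sketch (our own, ball variant only, with a brute-force BFS suitable only for tiny instances; the paper's results concern complexity, not this solver) that enumerates legal moves and decides solvability.

    from collections import deque

    def moves(state, capacity):
        """All states reachable in one move: pour the top run of equal-colored
        balls from one bin onto a color-matching or empty bin."""
        out = []
        for i, src in enumerate(state):
            if not src:
                continue
            top = src[-1]
            run = 0
            while run < len(src) and src[-1 - run] == top:
                run += 1
            for j, dst in enumerate(state):
                if i == j or (dst and dst[-1] != top):
                    continue
                k = min(run, capacity - len(dst))
                if k > 0:
                    ns = [list(b) for b in state]
                    ns[i], ns[j] = ns[i][:-k], ns[j] + [top] * k
                    out.append(tuple(tuple(b) for b in ns))
        return out

    def solvable(start, capacity):
        """BFS over configurations; sorted = every bin empty or full of one color."""
        def done(s):
            return all(len(set(b)) <= 1 and len(b) in (0, capacity) for b in s)
        start = tuple(tuple(b) for b in start)
        seen, queue = {start}, deque([start])
        while queue:
            s = queue.popleft()
            if done(s):
                return True
            for t in moves(s, capacity):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        return False

    # Two colors in two mixed bins plus one empty bin, capacity 2.
    print(solvable([["r", "b"], ["b", "r"], []], capacity=2))   # True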

24.02.7940
Jarosław Byrka
University of Wrocław
Informatyka Teoretyczna
Online Facility Location with Linear Delay

We study the problem of online facility location with delay. In this problem, a sequence of n clients appear in the metric space, and they need to be eventually connected to some open facility. The clients do not have to be connected immediately, but such a choice comes with a penalty: each client incurs a waiting cost (the difference between its arrival and connection time). At any point in time, an algorithm may decide to open a facility and connect any subset of clients to it. This is a well-studied problem both of its own, and within the general class of network design problems with delays.
Our main focus is on a new variant of this problem, where clients may be connected also to an already open facility, but such action incurs an extra cost: an algorithm pays for waiting of the facility (a cost incurred separately for each such "late" connection). This is reminiscent of online matching with delays, where both sides of the connection incur a waiting cost. We call this variant two-sided delay to differentiate it from the previously studied one-sided delay.
We present an O(1)-competitive deterministic algorithm for the two-sided delay variant. On the technical side, we study a greedy strategy, which grows budgets with increasing waiting delays and opens facilities for subsets of clients once sums of these budgets reach certain thresholds. Our technique is a substantial extension of the approach used by Jain, Mahdian and Saberi [STOC 2002] for analyzing the performance of offline algorithms for facility location.
We then show how to transform our O(1)-competitive algorithm for the two-sided delay variant into an O(log n / log log n)-competitive deterministic algorithm for one-sided delays. We note that all previous online algorithms for problems with delays in general metrics have at least logarithmic ratios.

Joint work with Marcin Bienkowski, Martin Böhm and Jan Marcinkowski

09.10.68173
Jan Derbisz
Informatyka Teoretyczna
Circular-arc graphs and the Helly property

Circular-arc graphs, defined as the intersection graphs of arcs of a fixed circle, are probably one of the simplest classes of intersection graphs which does not satisfy the Helly property in general (i.e. there are circular-arc graphs in which some cliques can be represented by arcs whose common intersection is empty). In particular, some cliques of a circular-arc graph G may satisfy the Helly property in some but not all circular-arc representations of G.
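
The following Python sketch illustrates the failure of the Helly property for circular arcs: three pairwise-intersecting arcs with no common point. The arc representation and the test are our own; for finitely many proper arcs, a nonempty common intersection must contain some arc's start point, so checking those candidates suffices.

    def contains(arc, x):
        """Closed circular arc (s, e) on the unit circle, read clockwise from s to e."""
        s, e = arc
        return s <= x <= e if s <= e else x >= s or x <= e

    def intersect(a, b):
        """Two circular arcs meet iff one contains an endpoint of the other."""
        return any(contains(a, x) for x in b) or any(contains(b, x) for x in a)

    def helly(arcs):
        """Return (pairwise-intersecting?, common point exists?). A nonempty common
        intersection of proper arcs must contain some arc's start point."""
        common = any(all(contains(a, s) for a in arcs) for s, _ in arcs)
        pairwise = all(intersect(a, b) for i, a in enumerate(arcs) for b in arcs[i + 1:])
        return pairwise, common

    # Three long arcs: pairwise intersecting, yet no point lies in all of them,
    # so this clique of arcs violates the Helly property.
    print(helly([(0.0, 0.5), (0.4, 0.9), (0.8, 0.3)]))   # (True, False)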

In the Helly Clique Problem, for a given circular-arc graph G and some of its cliques C_1,...,C_k (not necessarily maximal in G) one needs to decide whether there exists a circular-arc representation of G in which all the cliques C_1,...,C_k satisfy the Helly property. We know that the Helly Clique Problem is NP-complete and, under the ETH, it cannot be solved in time 2^{o(k)}·poly(n), where n is the number of vertices of G (Ağaoğlu et al.).

In the talk we will consider the Helly Clique Problem in the context of parameterized complexity, where the natural parameter is the number of cliques given in the input. We will show that the Helly Clique Problem:
- admits an algorithm with running time 2^{O(k log k)}·poly(n) (and hence it belongs to FPT),
- admits a polynomial kernel of size O(k^6).

Moreover, we will discuss the relation of the Helly Clique Problem with the recognition problems of so-called H-graphs; in particular, in the context of those graphs H for which the recognition problem remains open.

This is joint work with T. Krawczyk. The talk also includes joint work with D. Ağaoğlu, O. Cagrici, T. Hartmann, P. Hliněný, J. Kratochvíl, T. Krawczyk, and P. Zeman.

28.07.51691
Aleksander Katan, Roch Wójtowicz

Filling crosswords is very hard

The authors analyze the problem of filling crosswords, which was already considered, for example, by Garey and Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness". In this problem, we are given m words and n horizontal or vertical slots, and we are asked to fill the slots so that the intersections of the slots agree. The authors try to pinpoint the source of the hardness of this puzzle by looking at the structure of the intersection graph of the slots, focusing on the case where this graph resembles a tree. Unfortunately, if we assume that words may not be reused, the problem turns out to remain NP-hard even under very severe restrictions, for example when the intersection graph is a union of stars and the alphabet has size 2, or when it is a matching (the crossword consists of disjoint crosses) and the alphabet has size 3. The question becomes easier when we allow words to be repeated, as the authors construct an algorithm running in time m^{tw}, where tw denotes the treewidth of the intersection graph. However, even then the algorithm cannot be improved to an FPT one. Moreover, under the ETH, the problem cannot be solved in time m^{o(k)}, where k is the number of horizontal slots of the instance.

Motivated by these negative results, the authors consider a more restricted case parameterized by the total number of slots, not just the horizontal ones. The problem then becomes FPT (as long as the alphabet has constant size), but the running time is exponential in n^2. They show that, under the ETH, there is no algorithm running in time 2^{o(n^2)}, even for a two-letter alphabet. The paper closes with the optimization version, in which we try to fill as many cells as possible. A 1/2-approximation is then easy to obtain, even in the weighted case: it suffices to consider only the vertical or only the horizontal slots. This algorithm is plausibly optimal, as the existence of a better polynomial-time approximation would contradict the Unique Games Conjecture. The last two results hold regardless of whether we consider the variant with or without repetitions of words.
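
The 1/2-approximation described above is easy to sketch when words may be reused: slots of one orientation never intersect each other, so each can be filled independently. The Python toy below (slot and dictionary data are invented, and slots are represented just by their lengths) keeps the better of the two orientations.

    def half_approx(h_slots, v_slots, words):
        """1/2-approximation for maximizing filled cells when words may be reused:
        every filled cell belongs to a horizontal or a vertical slot, so the better
        orientation alone recovers at least half of the optimum."""
        lengths = {len(w) for w in words}

        def filled(slots):
            return sum(L for L in slots if L in lengths)   # slots given by length

        h, v = filled(h_slots), filled(v_slots)
        return ("horizontal", h) if h >= v else ("vertical", v)

    # Slot lengths of a toy crossword and a small dictionary.
    print(half_approx([3, 5, 4], [3, 3], ["cat", "dog", "house"]))  # ('horizontal', 8)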

22.09.10677
Torsten Mütze
University of Warwick & Charles University
Informatyka Teoretyczna
Efficient generation of elimination trees and Hamilton paths on graph associahedra

An elimination tree for a connected graph G is a rooted tree on the vertices of G obtained by choosing a root x and recursing on the connected components of G−x to produce the subtrees of x. Elimination trees appear in many guises in computer science and discrete mathematics, and they are closely related to centered colorings and tree-depth. They also encode many interesting combinatorial objects, such as bitstrings, permutations and binary trees. We apply the recent Hartung-Hoang-Mütze-Williams combinatorial generation framework to elimination trees, and prove that all elimination trees for a chordal graph G can be generated by tree rotations using a simple greedy algorithm (see www.combos.org/elim). This yields a short proof for the existence of Hamilton paths on graph associahedra of chordal graphs. Graph associahedra are a general class of high-dimensional polytopes introduced by Carr, Devadoss, and Postnikov, whose vertices correspond to elimination trees and whose edges correspond to tree rotations. As special cases of our results, we recover several classical Gray codes for bitstrings, permutations and binary trees, and we obtain a new Gray code for partial permutations. Our algorithm for generating all elimination trees for a chordal graph G can be implemented in time O(m+n) per generated elimination tree, where m and n are the number of edges and vertices of G, respectively. If G is a tree, we improve this to a loopless algorithm running in time O(1) per generated elimination tree. We also prove that our algorithm produces a Hamilton cycle on the graph associahedron of G, rather than just a Hamilton path, if the graph G is chordal and 2-connected. Moreover, our algorithm characterizes chordality, i.e., it computes a Hamilton path on the graph associahedron of G if and only if G is chordal.
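
The recursive definition in the first sentence translates directly into code; the Python sketch below (our own illustration, choosing roots by a given vertex order) builds an elimination tree by picking a root and recursing on the components of the remaining graph. It does not implement the rotation-based generation algorithm of the talk.

    def elimination_tree(adj, order):
        """Build an elimination tree of the graph `adj` (dict: vertex -> set of
        neighbours): root at the earliest vertex in `order`, then recurse on the
        connected components of the remaining graph, as in the definition above."""
        def components(verts):
            verts, comps = set(verts), []
            while verts:
                stack, comp = [next(iter(verts))], set()
                while stack:
                    u = stack.pop()
                    if u in comp:
                        continue
                    comp.add(u)
                    stack.extend(v for v in adj[u] if v in verts and v not in comp)
                verts -= comp
                comps.append(comp)
            return comps

        def build(verts):
            root = min(verts, key=order.index)      # choose root by the given order
            return (root, [build(c) for c in components(verts - {root})])

        return build(set(adj))

    # Path a-b-c: rooting at b splits it into single-vertex components {a}, {c}.
    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    print(elimination_tree(adj, ["b", "a", "c"]))   # ('b', [('a', []), ('c', [])])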
 
This is joint work with Jean Cardinal (ULB) and Arturo Merino (TU Berlin)
16.09.38056
Daniel Kráľ
Masaryk University in Brno
Informatyka Teoretyczna
Uniform Turán density of hypergraphs

In the early 1980s, Erdős and Sós initiated the study of the classical Turán problem with a uniformity condition: the uniform Turán density of a hypergraph H is the infimum over all d for which any sufficiently large hypergraph with the property that all its linear-size subhypergraphs have density at least d contains H. In particular, they raised the questions of determining the uniform Turán densities of K_4^(3), the complete 4-vertex 3-uniform hypergraph, and K_4^(3)−, the hypergraph K_4^(3) with an edge removed. The latter question was solved only recently in [Israel J. Math. 211 (2016), 349–366] and [J. Eur. Math. Soc. 97 (2018), 77–97], while the former has remained open for almost 40 years.

Prior to our work, the hypergraph K_4^(3)− was the only 3-uniform hypergraph with non-zero uniform Turán density determined exactly. During the talk, we will present the following two results:

  • We construct a family of 3-uniform hypergraphs with uniform Turán density equal to 1/27. This answers a question of Reiher, Rödl and Schacht [J. London Math. Soc. 97 (2018), 77–97] on the existence of such hypergraphs, stemming from their result that the uniform Turán density of every 3-uniform hypergraph is either 0 or at least 1/27.
  • We show that the uniform Turán density of the tight 3-uniform cycle of length k≥5 is equal to 4/27 if k is not divisible by three, and equal to zero otherwise. The case k=5 resolves a problem suggested by Reiher [European J. Combin. 88 (2020), 103117].

The talk is based on results obtained jointly with (subsets of) Matija Bucić, Jacob W. Cooper, Frederik Garbe, Ander Lamaison, Samuel Mohr and David Munhá Correia.

Previous talks

13.01.16153
Louis Esperet
Université Grenoble Alpes
Informatyka Teoretyczna
Universal graphs and labelling schemes

Given a graph class C, a graph G is universal for C if it "contains" all the graphs from C. As there are several notions of containment, there are several notions of universal graphs. In this talk I'll mention two versions:

  • induced-universal graphs: here G contains all the graphs of C as induced subgraphs
  • isometric-universal graphs: here G contains isometric copies of all the graphs from C (isometric means that the distances are preserved in the embedding)

Note that an isometric copy is an induced copy, so the second notion is stronger. These notions are closely related to the notion of labelling schemes in graphs. The goal is to assign labels to the vertices of each graph G from C such that upon reading the labels of any two vertices u and v, we know some properties of u and v in G (whether they are adjacent, or their distance, or whether u is reachable from v if G is a digraph). It turns out that minimizing the size of the labels is equivalent to constructing small universal graphs, at least in the case of induced-universal graphs. For isometric-universal graphs some additional work needs to be done.

I'll survey some recent progress in this area. In particular I'll show how to construct induced-universal graphs of almost optimal size for any hereditary class, using the regularity lemma. In particular this implies almost optimal reachability labelling schemes in digraphs and comparability labelling schemes in posets, and the construction of an almost optimal universal poset for the class of all n-element posets (of size 2^{n/4+o(n)}). I will also show how to construct isometric-universal graphs of size 3^{n+o(n)} for the class of all n-vertex graphs, answering a question of Peter Winkler.

Based on joint work with Marthe Bonamy, Cyril Gavoille, Carla Groenland, and Alex Scott.

04.01.70911
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Local Dimension of Planar Posets

In 1981, Kelly showed that planar posets can have arbitrarily large dimension. However, the posets in Kelly's example have bounded Boolean dimension and bounded local dimension, leading naturally to the question of whether either Boolean dimension or local dimension is bounded for the class of planar posets. The question for Boolean dimension was first posed by Nešetřil and Pudlák in 1989 and remains unanswered today. The concept of local dimension is quite new, introduced in 2016 by Ueckerdt. In just the last year, researchers have obtained many interesting results concerning Boolean dimension and local dimension, contrasting these parameters with the classic Dushnik-Miller concept of dimension, and establishing links between both parameters and structural graph theory, path-width and tree-width in particular. Here we show that local dimension is not bounded on the class of planar posets. Our proof also shows that the local dimension of a poset is not bounded in terms of the maximum local dimension of its blocks, and it provides an alternative proof of the fact that the local dimension of a poset cannot be bounded in terms of the tree-width of its cover graph, independent of its height. This is joint work with Jarosław Grytczuk and W.T. Trotter.

Bartłomiej Bosek, Jarosław Grytczuk, William T. Trotter. Local Dimension is Unbounded for Planar Posets. arXiv, pages 1-12, 2017.

(the seminar will only be online)

28.05.29842
Bartosz Walczak
Informatyka Teoretyczna
Approximating Pathwidth for Graphs of Small Treewidth

We describe a polynomial-time algorithm which, given a graph G with treewidth t, approximates the pathwidth of G to within a ratio of O(t log t). This is the first algorithm to achieve an f(t)-approximation for some function f.

Our approach builds on the following key insight: every graph with large pathwidth has large treewidth or contains a subdivision of a large complete binary tree. Specifically, we show that every graph with pathwidth at least th+2 has treewidth at least t or contains a subdivision of a complete binary tree of height h+1. The bound th+2 is best possible up to a multiplicative constant. This result was motivated by, and implies (with c=2), the following conjecture of Kawarabayashi and Rossman (SODA'18): there exists a universal constant c such that every graph with pathwidth Ω(k^c) has treewidth at least k or contains a subdivision of a complete binary tree of height k. (Indeed, instantiating our bound with t=k and h=k−1 shows that pathwidth at least k(k−1)+2 = Ω(k²) forces treewidth at least k or a subdivision of a complete binary tree of height k.)

Our main technical algorithm takes a graph G and some (not necessarily optimal) tree decomposition of G of width t' in the input, and it computes in polynomial time an integer h, a certificate that G has pathwidth at least h, and a path decomposition of G of width at most (t'+1)h+1. The certificate is closely related to (and implies) the existence of a subdivision of a complete binary tree of height h. The approximation algorithm for pathwidth is then obtained by combining this algorithm with the approximation algorithm of Feige, Hajiaghayi, and Lee (STOC'05) for treewidth.

 

Joint work with Carla Groenland, Gwenaël Joret, and Wojciech Nadara.

18.12.13414
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Majority choosability of digraphs

One of the variations of graph coloring is to assign colors to vertices so that for each vertex v, at most half of the neighbors of v receive the same color as v. Such a coloring is called a majority coloring of the graph. The goal is to majority-color the graph with as few colors as possible. During the presentation, various variants of this problem will be discussed: historical results, the latest results, as well as still intriguing conjectures. Among other things, results of joint work with Marcin Anholcer, Jarosław Grytczuk, and Gabriel Jakóbczak will be presented.
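
For the undirected case described in the first sentence, two colors always suffice, by a classical local-search argument: flipping a vertex that disagrees with more than half of its neighbours strictly decreases the number of monochromatic edges, so the process terminates. A minimal Python sketch of this argument (our own illustration; the digraph results of the talk need different techniques):

    def majority_2_coloring(adj):
        """Undirected majority coloring with 2 colors by local search: while some
        vertex agrees with more than half of its neighbours, flip it. Each flip
        decreases the number of monochromatic edges, so the loop terminates."""
        color = {v: 0 for v in adj}
        changed = True
        while changed:
            changed = False
            for v in adj:
                same = sum(color[u] == color[v] for u in adj[v])
                if 2 * same > len(adj[v]):
                    color[v] = 1 - color[v]
                    changed = True
        return color

    # Triangle: every vertex ends with at most one neighbour of its own color.
    print(majority_2_coloring({0: [1, 2], 1: [0, 2], 2: [0, 1]}))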

  1. László Lovász, On decomposition of graphs, Studia Scientiarum Mathematicarum Hungarica. A Quarterly of the Hungarian Academy of Sciences, 1, 237–238, 1966.
  2. Paul D. Seymour, On the two-colouring of hypergraphs, The Quarterly Journal of Mathematics. Oxford. Second Series, 25, 303–312, 1974.
  3. Dominic van der Zypen, Majority coloring for directed graphs, MathOverflow, 2016.
  4. Stephan Kreutzer, Sang-il Oum, Paul D. Seymour, Dominic van der Zypen, and David R. Wood, Majority colourings of digraphs, Electronic Journal of Combinatorics, 24(2), Paper 2.25, 2017.
  5. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Majority Choosability of Digraphs, Electronic Journal of Combinatorics, 24(3), Paper 3.57, 2017.
  6. António Girão, Teeradej Kittipassorn, Kamil Popielarz, Generalized majority colourings of digraphs, Combinatorics, Probability and Computing, 26(6), 850–855, 2017.
  7. Fiachra Knox, Robert Šámal, Linear bound for majority colourings of digraphs, Electronic Journal of Combinatorics, 25(3), Paper 3.29, 2018.
  8. Bartłomiej Bosek, Jarosław Grytczuk, Gabriel Jakóbczak, Majority Coloring Game, Discrete Applied Mathematics, 255, 15–20, 2019.

(the seminar will only be online)

23.03.81862
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Majority choosability of digraphs

One of the variations of graph coloring is to assign colors to vertices so that for each vertex v, at most half of the neighbors of v receive the same color as v. Such a coloring is called a majority coloring of the graph. The goal is to majority-color the graph with as few colors as possible. During the presentation, various variants of this problem will be discussed: historical results, the latest results, as well as still intriguing conjectures. Among other things, results of joint work with Marcin Anholcer, Jarosław Grytczuk, and Gabriel Jakóbczak will be presented.

  1. László Lovász, On decomposition of graphs, Studia Scientiarum Mathematicarum Hungarica. A Quarterly of the Hungarian Academy of Sciences, 1, 237–238, 1966.
  2. Paul D. Seymour, On the two-colouring of hypergraphs, The Quarterly Journal of Mathematics. Oxford. Second Series, 25, 303–312, 1974.
  3. Dominic van der Zypen, Majority coloring for directed graphs, MathOverflow, 2016.
  4. Stephan Kreutzer, Sang-il Oum, Paul D. Seymour, Dominic van der Zypen, and David R. Wood, Majority colourings of digraphs, Electronic Journal of Combinatorics, 24(2), Paper 2.25, 2017.
  5. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Majority Choosability of Digraphs, Electronic Journal of Combinatorics, 24(3), Paper 3.57, 2017.
  6. António Girão, Teeradej Kittipassorn, Kamil Popielarz, Generalized majority colourings of digraphs, Combinatorics, Probability and Computing, 26(6), 850–855, 2017.
  7. Fiachra Knox, Robert Šámal, Linear bound for majority colourings of digraphs, Electronic Journal of Combinatorics, 25(3), Paper 3.29, 2018.
  8. Bartłomiej Bosek, Jarosław Grytczuk, Gabriel Jakóbczak, Majority Coloring Game, Discrete Applied Mathematics, 255, 15–20, 2019.

(the seminar will only be online)

05.12.29864
Bartosz Wodziński
Optymalizacja Kombinatoryczna
On the unique games conjecture

For many hard problems, instead of solving them exactly, we have to settle for approximation algorithms. Apart from good time complexity and a decent approximation factor, we would like to know whether such an algorithm achieves the best approximation ratio possible (assuming P ≠ NP). Unfortunately, for many NP-complete problems there is a huge gap between the best known approximation ratio and the ratio that is provably unachievable in polynomial time. For instance, for the Vertex Cover problem we do not know any algorithm with a ratio better than 2, while it was proved in 2005 that a ratio better than ~1.36 is impossible. As an attempt to close this gap, the so-called Unique Games Conjecture was formulated by Khot in 2002. It states that given a (1−ε)-satisfiable instance of the Unique Label Cover problem, it is NP-hard to find a solution satisfying even an ε fraction of the constraints. Assuming it, we can prove many tight inapproximability results; for example, it implies that the Goemans-Williamson algorithm for the Max-Cut problem achieves the best possible approximation ratio, and that no ratio better than 2 can be achieved for the Vertex Cover problem. The Unique Games Conjecture is an unusual open problem in that the academic world is about evenly divided on whether it is true or not. During the seminar, I will cover the conjecture in more detail, giving more examples of its influence and presenting recent progress towards resolving it.

  1. Subhash Khot. On the Unique Games Conjecture (Invited Survey). 2010 IEEE 25th Annual Conference on Computational Complexity, Cambridge, MA, 2010, pp. 99-121.
  2. Bartosz Wodziński. Unique Games Conjecture. slides. 2020.

(the seminar will only be online)

25.11.79123
Rafał Byczek
Optymalizacja Kombinatoryczna
Wegner’s conjecture - colouring the square of a planar graph

The square G² of a graph G is the graph on the same vertex set in which two vertices are joined by an edge if their distance in G is at most two. The chromatic number of the square of a graph G lies between D+1 and D²+1, where D is the maximum degree of G. Equivalently, a square coloring of a graph colors vertices at distance at most 2 with different colors.
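
The definition is straightforward to compute with; the Python sketch below (our own toy) builds G² and first-fit colors it, checking the trivial bounds quoted above on the 5-cycle, whose square is K5.

    from itertools import combinations

    def square(adj):
        """G²: join u,v whenever their distance in G is 1 or 2."""
        sq = {v: set(ns) for v, ns in adj.items()}
        for u in adj:
            for v, w in combinations(adj[u], 2):   # v,w are at distance 2 via u
                sq[v].add(w)
                sq[w].add(v)
        return sq

    def greedy_coloring(adj, order):
        """First-fit coloring; on G² it uses at most D² + 1 colors,
        matching the trivial upper bound quoted above."""
        color = {}
        for v in order:
            used = {color[u] for u in adj[v] if u in color}
            color[v] = next(c for c in range(len(adj)) if c not in used)
        return color

    # 5-cycle: its square is K5, so the square needs 5 = D² + 1 colors.
    C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    print(max(greedy_coloring(square(C5), list(C5)).values()) + 1)   # 5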

In 1977, Gerd Wegner proved that the square of a cubic planar graph is 8-colorable. He conjectured that this bound can be improved: the chromatic number of G² is at most 7 if D = 3, at most D + 5 if 4 ≤ D ≤ 7, and at most ⌊3D/2⌋ + 1 otherwise. Wegner also gave examples showing that these upper bounds would be tight.

C. Thomassen (2006) proved that the conjecture is true for planar graphs with D = 3. The conjecture is still open for planar graphs with D ≥ 4, but several upper bounds in terms of the maximum degree D have been proved. In 1993, Jonas proved that χ(G²) ≤ 9D − 19 for planar graphs with D ≥ 5. Agnarsson and Halldórsson showed that χ(G²) ≤ ⌊9D/5⌋ + 2 for every planar graph G with maximum degree D ≥ 749. Van den Heuvel and McGuinness (2003) showed that χ(G²) ≤ 2D + 25, Borodin et al. (2002) proved that χ(G²) ≤ ⌊9D/5⌋ + 1 for D ≥ 47, and Molloy and Salavatipour (2005) proved that χ(G²) ≤ ⌊5D/3⌋ + 78, and moreover χ(G²) ≤ ⌊5D/3⌋ + 25 if D ≥ 241. The conjecture is also confirmed for outerplanar graphs and graphs with no K4 minor.
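
As a toy illustration (not part of the surveyed results), the square of a graph and a greedy coloring of it can be computed directly from the definition; greedy coloring uses at most D² + 1 colors, which is exactly the trivial upper bound mentioned above.

    def greedy_square_coloring(adj):
        # adj: {v: set of neighbors} of an undirected graph G.
        # Build G^2: join vertices at distance at most two.
        sq = {v: set(adj[v]) for v in adj}
        for v in adj:
            for u in adj[v]:
                sq[v] |= adj[u] - {v}
        # Greedy coloring of G^2, highest G^2-degree first.
        color = {}
        for v in sorted(sq, key=lambda v: -len(sq[v])):
            used = {color[u] for u in sq[v] if u in color}
            color[v] = next(c for c in range(len(sq) + 1) if c not in used)
        return color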

The aim of the seminar will be to present the main facts about Wegner’s conjecture and main techniques and ideas which were used to prove some upper bounds. The presentation will be based on my master thesis.

  1. Rafał Byczek. Wegner's conjecture - colouring the square of a planar graph. slides. 2020.

(the seminar will be held online only)

14.05.76272
Szymon Kapała
Podstawy Informatyki
Searching for shortest and least programs by Cristian Calude, Sanjay Jain, Wolfgang Merkle and Frank Stephan
The Kolmogorov complexity of a string x is defined as the length of a shortest program p for x on some appropriate universal machine U, that is, U(p) = x and p is a shortest string with this property. Neither the plain nor the prefix-free version of Kolmogorov complexity is recursive, but for both versions it is well known that there are recursive exact Solovay functions, that is, recursive upper bounds for Kolmogorov complexity that are infinitely often tight. Let a coding function for a machine M be a function f such that f(x) is always a program for x on M. From the existence of exact Solovay functions it follows easily that for every universal machine there is a recursive coding function that maps infinitely many strings to a shortest program. Extending a recent line of research, in what follows it is investigated in which situations there is a coding function for some universal machine that maps infinitely many strings to the length-lexicographically least program. The main results, which hold in the plain as well as in the prefix-free setting, are the following. For every universal machine there is a recursive coding function that maps infinitely many strings to their least programs. There is a partial recursive coding function (defined in the natural way) for some universal machine that for every set maps infinitely many prefixes of the set to their least programs. Exactly for the sets that are Bennett shallow (not deep) is there a recursive coding function for some universal machine that maps all prefixes of the set to their least programs. Differences between the plain and the prefix-free frameworks are obtained by considering effective sequences I_1, I_2, ... of mutually disjoint finite sets and asking for a recursive coding function for some universal machine that maps at least one string in each set I_n to its least code. Such coding functions do not exist in the prefix-free setting but do exist in the plain setting in case the sets I_n are not too small.
23.01.32579
Adam Polak
Informatyka Teoretyczna
Monochromatic triangles, intermediate matrix products, and convolutions

The most studied linear algebraic operation, matrix multiplication, has surprisingly fast O(n^ω) time algorithms, for ω < 2.373. On the other hand, the (min,+)-product, which is at the heart of APSP, is widely conjectured to require cubic time. There is a plethora of matrix products and graph problems whose complexity seems to lie in the middle of these two problems, e.g. the (min,max)-product, the min-witness product, and APSP in directed unweighted graphs. The best running times for these "intermediate" problems are all O(n^{(3+ω)/2}). A similar phenomenon occurs for convolution problems.

Can one improve upon the running times for these intermediate problems? Or, alternatively, can one relate these problems to each other and to other key problems in a meaningful way?

We make progress on these questions by providing a network of fine-grained reductions. We show for instance that APSP in directed unweighted graphs and the min-witness product can be reduced to both the (min,max)-product and a variant of the monochromatic triangle problem. We also show that a natural convolution variant of monochromatic triangle is equivalent to the famous 3SUM problem. As this variant is solvable in O(n^{3/2}) time and 3SUM is solvable in O(n^2) time, our result gives the first fine-grained equivalence between natural problems of different running times.
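
For reference, the naive cubic algorithm for one of these intermediate products, the (min,max)-product, is the baseline that the O(n^{(3+ω)/2}) algorithms improve upon; a minimal sketch:

    def min_max_product(A, B):
        # C[i][j] = min over k of max(A[i][k], B[k][j]); naive O(n^3) time.
        n = len(A)
        return [[min(max(A[i][k], B[k][j]) for k in range(n))
                 for j in range(n)]
                for i in range(n)]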

Joint work with Andrea Lincoln and Virginia Vassilevska Williams.

23.12.76275
Piotr Mikołajczyk
Podstawy Informatyki
Satisfiability in Strategy Logic can be Easier than Model Checking by Erman Acar, Massimo Benerecetti and Fabio Mogavero.

In the design of complex systems, model checking and satisfiability arise as two prominent decision problems. While model checking requires the designed system to be provided in advance, satisfiability allows one to check whether such a system even exists. With very few exceptions, the second problem turns out to be harder than the first from a complexity-theoretic standpoint. In this paper, we investigate the connection between the two problems for a non-trivial fragment of Strategy Logic (SL, for short). SL extends LTL with first-order quantification over strategies, thus allowing explicit reasoning about the strategic abilities of agents in a multi-agent system. Satisfiability for the full logic is known to be highly undecidable, while model checking is non-elementary.

The SL fragment we consider is obtained by preventing strategic quantification within the scope of temporal operators. The resulting logic is still quite powerful, allowing one to express important game-theoretic properties of multi-agent systems, such as the existence of Nash and immune equilibria, and to formalize the rational synthesis problem. We show that satisfiability for this fragment is PSPACE-complete, while its model-checking complexity is 2EXPTIME-hard. The result is obtained by means of an elegant encoding of the problem into the satisfiability of conjunctive-binding first-order logic, a recently discovered decidable fragment of first-order logic.

31.07.48951
Piotr Helm, Krzysztof Zysiak

Optimal Sorting with Persistent Comparison Errors [B. Geissmann et al.]
We consider the problem of sorting n elements under persistent comparison errors. In this problem, each comparison between two elements may go wrong with some fixed (small) probability, and comparisons cannot be repeated. Perfect sorting in this model is impossible, and the goal is to minimize the dislocation of each element in the output sequence, that is, its distance from its correct position. Existing lower bounds for this problem show that no algorithm can guarantee, with high probability, maximum dislocation better than Ω(log n) or total dislocation better than Ω(n), regardless of its running time.
In this work, we present the first O(n log n)-time sorting algorithm that guarantees both maximum dislocation O(log n) and total dislocation O(n) with high probability. This settles the time complexity of the problem and shows that comparison errors do not increase it: a sequence with the best possible dislocation can be obtained in O(n log n) time, and even without comparison errors Ω(n log n) time is necessary to guarantee such dislocation bounds.
To achieve this optimal result, we solve two subproblems with methods that may find further, independent applications. The first is how to locate the position at which to insert an element into an almost sorted sequence with dislocation O(log n) so that the dislocation of the resulting sequence is still O(log n). The second is how to simultaneously insert m elements into an almost sorted sequence of m other elements so that the resulting sequence of 2m elements remains almost sorted.
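
To illustrate the model (this is not the paper's algorithm), one can simulate persistent comparison errors with a comparator whose answer is fixed once per pair, then sort naively and measure the resulting dislocations:

    import random

    def noisy_sort_dislocation(n, p=0.05, seed=0):
        # Each pair comparison is wrong with probability p, and the error
        # is persistent: repeating the comparison gives the same answer.
        # Plain insertion sort with such a comparator illustrates the model;
        # it does NOT achieve the paper's O(log n) dislocation guarantee.
        rng = random.Random(seed)
        flipped = {}                       # persistent per-pair error flags
        def less(a, b):
            key = (min(a, b), max(a, b))
            if key not in flipped:
                flipped[key] = rng.random() < p
            return (a < b) != flipped[key]
        out = []
        for x in range(n):
            i = 0
            while i < len(out) and less(out[i], x):
                i += 1
            out.insert(i, x)
        dislocations = [abs(pos - v) for pos, v in enumerate(out)]
        return max(dislocations), sum(dislocations)
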
27.12.29840
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Majority choosability of digraphs

One of the variations of graph coloring is the assignment of colors to vertices such that each vertex v receives the same color as at most half of its out-neighbors. Such a coloring is called a majority coloring of the digraph. The goal is to color the graph in this way with the smallest number of colors. During the presentation, various variants of this problem will be discussed, together with historical results, the latest results, and still intriguing conjectures. Among other things, the results of joint work with Marcin Anholcer, Jarosław Grytczuk, and Gabriel Jakóbczak will be presented.
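
For the undirected analogue going back to Lovász [1], a simple local search already yields a majority 2-coloring: flipping any vertex that agrees with a strict majority of its neighbors strictly decreases the number of monochromatic edges, so the process terminates. A minimal sketch (the digraph variants discussed in the talk are substantially harder):

    def majority_2_coloring(adj):
        # adj: {v: set of neighbors} of an undirected graph.
        # Invariant on termination: every vertex has the same color as at
        # most half of its neighbors. Each flip strictly decreases the
        # number of monochromatic edges, which guarantees termination.
        color = {v: 0 for v in adj}
        changed = True
        while changed:
            changed = False
            for v in adj:
                same = sum(1 for u in adj[v] if color[u] == color[v])
                if 2 * same > len(adj[v]):   # strict majority agrees with v
                    color[v] ^= 1            # flip v's color
                    changed = True
        return color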

  1. László Lovász, On decomposition of graphs, Studia Scientiarum Mathematicarum Hungarica. A Quarterly of the Hungarian Academy of Sciences, 1, 237–238, 1966.
  2. Paul D. Seymour, On the two-colouring of hypergraphs, The Quarterly Journal of Mathematics. Oxford. Second Series, 25, 303–312, 1974.
  3. Dominic van der Zypen, Majority coloring for directed graphs, MathOverflow, 2016.
  4. Stephan Kreutzer, Sang-il Oum, Paul D. Seymour, Dominic van der Zypen, and David R. Wood, Majority colourings of digraphs, Electronic Journal of Combinatorics, 24(2), Paper 2.25, 2017.
  5. Marcin Anholcer, Bartłomiej Bosek, Jarosław Grytczuk, Majority Choosability of Digraphs, Electronic Journal of Combinatorics, 24(3), Paper 3.57, 2017.
  6. António Girão, Teeradej Kittipassorn, Kamil Popielarz, Generalized majority colourings of digraphs, Combinatorics, Probability and Computing, 26(6), 850–855, 2017.
  7. Fiachra Knox and Robert Šámal, Linear bound for majority colourings of digraphs, Electronic Journal of Combinatorics, 25(3), Paper 3.29, 2018.
  8. Bartłomiej Bosek, Jarosław Grytczuk, Gabriel Jakóbczak, Majority Coloring Game, Discrete Applied Mathematics, 255, 15–20, 2019.
23.06.43420
Przemysław Rutka (Lublin)
Podstawy Informatyki
Selected algorithmic applications of classical orthogonal polynomials
Classical orthogonal polynomials, their associated classical weight functions, and their properties find numerous applications in areas such as tomography, quantum mechanics, combinatorics, image and signal processing, data compression, and improving the efficiency of algorithms. In the last of these areas many interesting results continue to be obtained, allowing various problems to be solved numerically and efficiently. These problems include, in particular, barycentric Fejér, Hermite, and Lagrange interpolation, as well as extremal problems of Szegő and Markov-Bernstein type. In the first case, when n values are interpolated at nodes that are zeros of classical orthogonal polynomials, the computational complexity of algorithms evaluating the interpolating polynomials via barycentric formulas can be improved from O(n^2) to O(n). This requires suitable explicit formulas for the barycentric weights, or formulas relating the barycentric weights to the weights and nodes of Gaussian quadratures. In the second case, which turns out to be related to the first, one can derive formulas that allow the best constants appearing in Szegő- and Markov-Bernstein-type inequalities, together with the values of the extremal polynomials for which these inequalities become equalities, to be computed directly on a computer. These inequalities involve iterated classical weight functions and can be used to estimate the values or norms of derivatives D^k p or forward differences Δ^k p of polynomials p(x), in the continuous and the discrete case, respectively.
     Other results of this kind, using classical weights and/or classical orthogonal polynomials, can also be obtained for an isoperimetric-type problem in the class of planar closed polynomial curves, the electrostatic equilibrium problem for a system of charges, the problem of efficient, stable, and most economical interpolation, and the problem of two-sided a priori approximation estimates of Chernoff type.
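
As an illustration of the O(n) evaluation mentioned above, here is a minimal sketch of the barycentric interpolation formula, assuming the barycentric weights are precomputed (for zeros of classical orthogonal polynomials they are available in closed form; e.g., for Chebyshev nodes of the first kind x_i = cos((2i+1)π/(2n)) one may take w_i = (-1)^i sin((2i+1)π/(2n))):

    def barycentric_eval(x, nodes, values, weights):
        # Evaluate the Lagrange interpolant at x in O(n) time using the
        # barycentric formula with precomputed weights.
        num = den = 0.0
        for xi, fi, wi in zip(nodes, values, weights):
            if x == xi:          # exactly at a node: return the data value
                return fi
            t = wi / (x - xi)
            num += t * fi
            den += t
        return num / den
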
09.03.68171
Bartłomiej Bosek
Informatyka Teoretyczna
Algorithms for posets and graphs games – coloring and matching

Graph colorings and online algorithms on graphs constitute key areas of algorithmic graph theory. Specifically, the subject of this talk will be a presentation of results concerning

  • different variants of coloring of graphs and partially ordered sets,
  • online coloring of partially ordered sets,
  • incremental matchings of bipartite graphs.

The first part of the talk will concern different aspects of the coloring problem as well as different proof techniques. The presented results concern majority choosability of digraphs, harmonious coloring of hypergraphs, and the semi-uni conjecture for products of two posets. The next part of the presentation will concern online chain partitioning of posets. A full characterization will be presented of the class of posets for which the number of colors (chains) used by First-Fit is a function of the width, i.e., of the best offline solution. This part will also present two different subexponential online algorithms for the online chain partitioning problem. The last part will concern the incremental matching problem in bipartite graphs. An incremental algorithm will be presented that maintains a maximum-size matching in total time equal to the running time of one of the fastest offline maximum matching algorithms, given by Hopcroft and Karp. Moreover, I will show an analysis of the shortest augmenting path algorithm.

This is joint work with Marcin Anholcer, Jarosław Grytczuk, Sebastian Czerwiński, Paweł Rzążewski, Stefan Felsner, Kolja Knauer, Grzegorz Matecki, Tomasz Krawczyk, H. A. Kierstead, Matthew Smith, Dariusz Leniowski, Piotr Sankowski, Anna Zych-Pawlewicz.

29.09.76384
Michał Wrona
Informatyka Teoretyczna
Relational Width of First-Order Expansions of Homogeneous Graphs with Bounded Strict Width

We study the amount of consistency (measured by relational width) needed to solve the CSP parametrized by first-order expansions of countably infinite homogeneous graphs, that is, structures first-order definable in a homogeneous graph containing the edge relation E, the relation N that holds between distinct vertices not connected by an edge, and the equality. We study our problem for structures that additionally have bounded strict width, i.e., for which establishing local consistency of an instance of the CSP not only decides if there is a solution but also ensures that every solution may be obtained from a locally consistent instance by greedily assigning values to variables, without backtracking. It is known that with every countably infinite homogeneous graph G one can associate a unique minimal finite set S of finite graphs such that a finite graph H is an induced substructure of G if and only if there is no H' in S that embeds into H.

Our main result is that the structures under consideration have relational width exactly (2, L), where L is the maximal size of a forbidden subgraph in S, but not smaller than 3. It implies, in particular, that the CSP parametrized by such structures may be solved in time O(n^m), where n is the number of variables and m is the maximum of L and the arities of the relations in the signature, while strict width l ensures time O(n^{l+1}). Furthermore, since L may be arbitrarily large, our result contrasts with the collapse of the relational bounded width hierarchy for finite structures, in which case, if finite, relational width is always at most (2,3).

Finally, we show that certain maximally tractable constraint languages with a first-order definition in a countably infinite homogeneous graph, whose CSPs are already known to be solvable in polynomial time by other algorithms, may also be solved by establishing consistency, thus providing an alternative algorithm for already studied problems.

04.02.62636
Rafał Byczek, Bruno Pitrus

Approximating Edit Distance Within Constant Factor in Truly Sub-Quadratic Time

Edit distance is one way of measuring how similar two strings are: it counts the minimum number of insertions, deletions, and substitutions of single characters required to transform one word into the other. In this paper, the authors focus on the computational complexity of approximating the edit distance between a pair of words.

The exact edit distance can be computed with the classical dynamic programming algorithm running in quadratic time. In 2010, Andoni, Krauthgamer, and Onak presented an almost linear-time algorithm approximating the edit distance within a polylogarithmic approximation factor. In 2014, Backurs and Indyk showed that an exact algorithm running in time O(n^(2−δ)) would imply a fast algorithm for SAT and the falsity of the Strong Exponential Time Hypothesis (SETH). Moreover, recently in 2017, Abboud and Backurs showed that the existence of a truly subquadratic algorithm approximating the edit distance within a factor of 1 + o(1) would imply the falsity of certain conjectures concerning Boolean circuit complexity. This casts doubt on the approximability of the edit distance within a constant factor in truly subquadratic time.

In this paper, however, the authors answer this question affirmatively, presenting a very interesting algorithm approximating the edit distance within a constant factor, and proving that its running time is bounded from above by Õ(n^(2−2/7)).
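
For reference, the classical quadratic-time dynamic program that the paper's subquadratic approximation algorithm is measured against:

    def edit_distance(a, b):
        # Classical O(|a|*|b|) dynamic program, kept to two rows of memory.
        m, n = len(a), len(b)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1,        # delete a[i-1]
                             cur[j - 1] + 1,     # insert b[j-1]
                             prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitute
            prev = cur
        return prev[n]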

05.08.70908
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
A new variant of the game of cops and robber

We consider the following metric version of the Cops and Robbers game. Let G be a simple graph and let k ≥ 1 be a fixed integer. In the first round, Cop picks a subset of k vertices B = {v_1, v_2, ..., v_k} and then Robber picks a vertex u but keeps it secret. Then Cop asks Robber for the vector D_u(B) = (d_1, d_2, ..., d_k) whose components d_i = d_G(u, v_i), i = 1, 2, ..., k, are the distances from u to the vertices of B. In the second round, Robber may stay at the vertex u or move to any neighbouring vertex, again keeping his position secret. Then Cop picks another k vertices and asks as before for the corresponding distances to the vertex occupied by Robber, and so on in every subsequent round. The game stops when Cop determines exactly the current position of Robber; in that case, she is the winner. Otherwise (that is, if Cop is not able to localize him in any finite number of rounds), Robber is the winner. Let ζ(G) denote the least integer k for which Cop has a winning strategy. This parameter is well defined, since the inequality ζ(G) ≤ |V(G)| obviously holds. The aim of the talk is to present results concerning 2-trees, outerplanar graphs, and planar graphs. This is joint work with Przemysław Gordinowicz, Jarosław Grytczuk, Nicolas Nisse, Joanna Sokół, and Małgorzata Śleszyńska-Nowak.
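
A small illustrative check (not from the cited papers): whether Cop could localize a motionless Robber in a single round is precisely the question of whether B is a resolving set, i.e., whether the distance vectors to B are pairwise distinct. A sketch for connected unweighted graphs:

    from collections import deque

    def distances_from(adj, src):
        # BFS distances from src; adj: {v: iterable of neighbors}.
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        return dist

    def localizes_in_one_round(adj, B):
        # True iff the distance vectors to B identify every vertex
        # (assumes the graph is connected). The game's difficulty comes
        # from Robber moving while Cop adapts B over many rounds.
        dist = {b: distances_from(adj, b) for b in B}
        seen = set()
        for v in adj:
            vec = tuple(dist[b][v] for b in B)
            if vec in seen:
                return False
            seen.add(vec)
        return True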

Bartłomiej Bosek, Przemysław Gordinowicz, Jarosław Grytczuk, Nicolas Nisse, Joanna Sokół, Małgorzata Śleszyńska-Nowak. Centroidal localization game. arXiv, pages 1-15, 2017.

Bartłomiej Bosek, Przemysław Gordinowicz, Jarosław Grytczuk, Nicolas Nisse, Joanna Sokół, Małgorzata Śleszyńska-Nowak. Localization game on geometric and planar graphs. arXiv, pages 1-15, 2017.

31.03.51743
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Local Dimension is Unbounded for Planar Posets

In 1981, Kelly showed that planar posets can have arbitrarily large dimension. However, the posets in Kelly's example have bounded Boolean dimension and bounded local dimension, leading naturally to the question of whether either Boolean dimension or local dimension is bounded for the class of planar posets. The question for Boolean dimension was first posed by Nešetřil and Pudlák in 1989 and remains unanswered today. The concept of local dimension is quite new, introduced in 2016 by Ueckerdt. In just the last year, researchers have obtained many interesting results concerning Boolean dimension and local dimension, contrasting these parameters with the classic Dushnik-Miller concept of dimension, and establishing links between both parameters and structural graph theory, path-width and tree-width in particular. Here we show that local dimension is not bounded on the class of planar posets. Our proof also shows that the local dimension of a poset is not bounded in terms of the maximum local dimension of its blocks, and it provides an alternative proof of the fact that the local dimension of a poset cannot be bounded in terms of the tree-width of its cover graph, independent of its height. This is joint work with Jarosław Grytczuk and W.T. Trotter.

Bartłomiej Bosek, Jarosław Grytczuk, William T. Trotter. Local Dimension is Unbounded for Planar Posets. arXiv, pages 1-12, 2017.

26.05.18778
Bruno Pitrus
Podstawy Informatyki
Linear lambda terms as invariants of rooted trivalent maps by Noam Zeilberger

The main aim of the article is to give a simple and conceptual account of the correspondence (originally described by Bodini, Gardy, and Jacquot) between α-equivalence classes of closed linear lambda terms and isomorphism classes of rooted trivalent maps on compact oriented surfaces without boundary, as an instance of a more general correspondence between linear lambda terms with a context of free variables and rooted trivalent maps with a boundary of free edges. We begin by recalling a familiar diagrammatic representation for linear lambda terms, while at the same time explaining how such diagrams may be read formally as a notation for endomorphisms of a reflexive object in a symmetric monoidal closed (bi)category. From there, the "easy" direction of the correspondence is a simple forgetful operation which erases annotations on the diagram of a linear lambda term to produce a rooted trivalent map. The other direction views linear lambda terms as complete invariants of their underlying rooted trivalent maps, reconstructing the missing information through a Tutte-style topological recurrence on maps with free edges. As an application in combinatorics, we use this analysis to enumerate bridgeless rooted trivalent maps as linear lambda terms containing no closed proper subterms, and conclude by giving a natural reformulation of the Four Color Theorem as a statement about typing in lambda calculus.

27.05.40681
Dawid Pyczek and Jakub Rówiński
Podstawy Informatyki
Asymptotic Density and the Theory of Computability by Carl Jockusch and Paul Schupp
The purpose of this paper is to survey recent work on how classical asymptotic density interacts with the theory of computability. We have tried to make the survey accessible to those who are not specialists in computability theory; we mainly state results without proof, but we include a few easy proofs to illustrate the flavor of the subject.

In complexity theory, classes such as P and NP are defined using worst-case measures: a problem belongs to the class if there is an algorithm solving it with a suitable bound on its running time over all instances of the problem. Similarly, in computability theory, a problem is classified as computable if there is a single algorithm which solves all instances of the given problem. There is now a general awareness that worst-case measures may not give a good picture of a particular algorithm or problem, since hard instances may be very sparse. The paradigm case is Dantzig's Simplex Algorithm for linear programming problems. This algorithm runs many hundreds of times every day for scheduling and transportation problems, almost always very quickly. There are clever examples of Klee and Minty showing that there exist instances on which the Simplex Algorithm must take exponential time, but such examples are not encountered in practice. Observations of this type led to the development of average-case complexity by Gurevich and by Levin independently. There are different approaches to average-case complexity, but they all involve computing the expected value of the running time of an algorithm with respect to some measure on the set of inputs; thus the problem must be decidable, and one still needs to know the worst-case complexity. Another example of hard instances being sparse is the behavior of algorithms for decision problems in group theory used in computer algebra packages. There is often some kind of easy "fast check" algorithm which quickly produces a solution for "most" inputs of the problem. This is true even if the worst-case complexity of the particular problem is very high or the problem is even unsolvable. Thus many group-theoretic decision problems have a very large set of inputs where the (usually negative) answer can be obtained easily and quickly.
11.08.43528
Tomasz Krawczyk
Informatyka Teoretyczna
Representation and coloring of partially ordered sets under conditions of incomplete information

The purpose of my talk is to discuss several problems related to coloring and construction of appropriate representation for partially ordered sets (also posets) and graph classes derived from posets. In these problems, we will assume the following two scenarios:

in the first scenario, an algorithm receives the elements of a poset one after another. For each new element, the algorithm makes an irrevocable decision (assigns a color or extends the current representation) as soon as the element is presented (algorithms that work under such conditions are called on-line).

in the second scenario, an algorithm receives the incomparability graph of some poset together with a representation of some part of this graph, and its task is to check whether this partial representation can be extended to a representation of the whole graph that is appropriate for the considered class of graphs.

In the context of on-line algorithms, we focus our attention on two problems: partitioning posets into chains, which is a special case of on-line coloring of incomparability graphs, and embedding posets into the d-dimensional space R^d.

In the context of extending partial representations problems, we are interested in graph classes whose representations introduce a natural order on vertices of these graphs.

We focus our attention on:

  • function graphs, that is, intersection graphs of continuous functions [0,1] → R,

  • permutation graphs, that is, intersection graphs of linear functions [0,1] → R,

  • trapezoid graphs, that is, intersection graphs of trapezoids spanned between two fixed parallel lines containing the bases of these trapezoids.

31.12.73644
Marcin Pilipczuk
University of Warsaw
Informatyka Teoretyczna
Subexponential Parameterized Algorithms for Planar Graphs, Apex-Minor-Free Graphs and Graphs of Polynomial Growth via Low Treewidth Pattern Covering

We prove the following theorem. Given a planar graph G and an integer k, it is possible in polynomial time to randomly sample a subset A of vertices of G with the following properties:

1) A induces a subgraph of G of treewidth O(√k log k), and

2) for every connected subgraph H of G on at most k vertices, the probability that A covers the whole vertex set of H is at least (2^{O(√k log^2 k)} n^{O(1)})^{-1}, where n is the number of vertices of G.

Together with standard dynamic programming techniques for graphs of bounded treewidth, this result gives a versatile technique for obtaining (randomized) subexponential parameterized algorithms for problems on planar graphs, usually with running time bound 2^{O(√k log^2 k)} n^{O(1)}. The technique can be applied to problems expressible as searching for a small, connected pattern with a prescribed property in a large host graph; examples of such problems include Directed k-Path, Weighted k-Path, Vertex Cover Local Search, and Subgraph Isomorphism, among others. Up to this point, it was open whether these problems can be solved in subexponential parameterized time on planar graphs, because they are not amenable to the classic technique of bidimensionality. Furthermore, all our results hold in fact on any class of graphs that exclude a fixed apex graph as a minor, in particular on graphs embeddable in any fixed surface. We also provide a similar statement for graph classes of polynomial growth.

In the talk I will first focus on the background and motivation, and then highlight the main ideas of the proof by sketching the proof for the case of graph classes of polynomial growth. Based on joint work with Fedor Fomin, Daniel Lokshtanov, Dániel Marx, Michał Pilipczuk, and Saket Saurabh: http://arxiv.org/abs/1604.05999 and http://arxiv.org/abs/1610.07778.

04.08.84482
Yauheni Ananchuk
Podstawy Informatyki
Algebraic Foundations for Qualitative Calculi and Networks by Robin Hirsch, Marcel Jackson, and Tomasz Kowalski
Binary constraint problems have traditionally been considered as network satisfaction problems over some relation algebra. A constraint network is satisfiable if its nodes can be mapped into some representation of the relation algebra in such a way that the constraints are preserved. A qualitative representation is like an ordinary representation, but instead of requiring (a ; b) = a | b, as we do for ordinary representations, we only require a weaker condition relating a ; b and a | b. A constraint network is qualitatively satisfiable if its nodes can be mapped to elements of a qualitative representation, preserving the constraints. If a constraint network is satisfiable then it is clearly qualitatively satisfiable, but the converse can fail.
However, for a wide range of relation algebras including the point algebra, the Allen Interval Algebra, RCC8 and many others, a network is satisfiable if and only if it is qualitatively satisfiable. Unlike ordinary composition, the weak composition arising from qualitative representations need not be associative, so we can generalise by considering network satisfaction problems over non-associative algebras. We prove that, computationally, qualitative representations have many advantages over ordinary representations: whereas many finite relation algebras have only infinite representations, every finite qualitatively representable algebra has a finite qualitative representation; the representability problem for (the atom structures of) finite non-associative algebras is NP-complete; the network satisfaction problem over a finite qualitatively representable algebra is always in NP; and the validity of equations over qualitative representations is co-NP-complete. On the other hand, we prove that there is no finite axiomatisation of the class of qualitatively representable algebras.
23.11.2016
Piotr Danilewski
Informatyka Teoretyczna
Functional Code Building

 
A typical language translator uses a parser to convert an input string into some internal representation, usually in the form of an AST. The AST is then analyzed and transformed in passes. In the final pass, the AST is converted into the output, be it machine code or source code of another language.
 
However, if we try to combine different languages, e.g. for the purpose of a multi-domain project, we may have a problem: each language uses different kinds of AST nodes, makes different assumptions about the AST structure, and features different pass algorithms. Settling for common ground by limiting the node types, assumptions, and algorithms limits how the languages may reason about their code. Any high-level information is lost in such a representation, and high-level transformations cannot be defined.
 
Working on a common AST is similar to unstructured programming: the AST acts as a global machine state. Any change to the state, e.g. introduced by a new language, may inadvertently affect the behavior of the existing languages. In regular programming we have moved away from unstructured programming towards higher abstraction levels, e.g. through functional programming. If it can be done for regular programming, can it be done for ASTs and code builders as well?
 
Our solution does not treat an AST node as an object of a fixed type. Instead, each node is a function whose body entirely describes the node's behavior. Even the connections between nodes are represented internally by these functions, in the form of continuations.
The passes over the AST are replaced by a single invocation of these functions. The node functions may contain code for multiple stages of code analysis. In order to reduce the run time, these additional stages are scheduled through dynamic staging.
 
This gives the power of abstraction to code building. Even nodes coming from different languages may interact, as long as they understand the passed arguments.
 
This major change in how code is represented requires changes in how we think about common basic operations during language interpretation:
- how do we link two nodes/functions together?
- how do we create control flow structures?
- how do we perform name lookups, even across different languages?
- how do we optimize the code so that the AST layer disappears when generating code?
- what language-specific optimizations can be performed and how do we specify them?
 
We will address these questions in the upcoming talk.
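
A minimal sketch of the idea in Python (an illustration of nodes-as-functions, not the actual system): each node is a closure, and the links between nodes are continuations, so the whole "AST" is run by a single invocation.

    def literal(value):
        # A node is just a function: calling it with a continuation runs it.
        def node(env, k):
            return k(value)
        return node

    def var(name):
        def node(env, k):
            return k(env[name])
        return node

    def add(left, right):
        # Links between nodes are continuations: `left` passes its result
        # to a continuation that runs `right`, which hands both results to k.
        def node(env, k):
            return left(env, lambda lv: right(env, lambda rv: k(lv + rv)))
        return node

    # (1 + x) + 2, evaluated by one invocation instead of passes over an AST.
    prog = add(add(literal(1), var("x")), literal(2))
    print(prog({"x": 4}, lambda result: result))  # -> 7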

05.10.2016
Tomasz Kisielewski
Podstawy Informatyki
Proving properties of programs within their language


I will present an initial version of my research program, whose main goal is to enable proving properties about programs within the language they are written in. My motivation is based on a specific solution to the open-source prisoner's dilemma, where the players are programs with access to the other player's source code. After explaining the motivation I will shortly present the current state of automatic proving of programs' properties, with emphasis on strongly typed functional languages. Finally, I will introduce my long-term goals and the main challenges one should expect to face when implementing an effective function for proving facts about programs.

08.06.2016
Kamil Pietruszka
Podstawy Informatyki
Regular Combinators for String Transformations by Rajeev Alur, Adam Freilich, and Mukund Raghothaman

We focus on (partial) functions that map input strings to a monoid such as the set of integers with addition and the set of output strings with concatenation. The notion of regularity for such functions has been defined using two-way finite-state transducers, (one-way) cost register automata, and MSO-definable graph transformations. In this paper, we give an algebraic and machine-independent characterization of this class analogous to the definition of regular languages by regular expressions. When the monoid is commutative, we prove that every regular function can be constructed from constant functions using the combinators of choice, split sum, and iterated sum, that are analogs of union, concatenation, and Kleene *, respectively, but enforce unique (or unambiguous) parsing. Our main result is for the general case of non-commutative monoids, which is of particular interest for capturing regular string-to-string transformations for document processing. We prove that the following additional combinators suffice for constructing all regular functions:

(1) the left-additive versions of split sum and iterated sum, which allow transformations such as string reversal;

(2) sum of functions, which allows transformations such as copying of strings; and

(3) function composition, or alternatively, a new concept of chained sum, which allows output values from adjacent blocks to mix.

27.04.2016
Michał Zieliński
Podstawy Informatyki
Beta Reduction is Invariant, Indeed by Beniamino Accattoli and Ugo Dal Lago

Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.

13.01.2016
Piotr Bejda
Optymalizacja Kombinatoryczna
Perfect matchings in O(n log n) time in regular bipartite graphs

In this paper we consider the well-studied problem of finding a perfect matching in a d-regular bipartite graph on 2n nodes with m = nd edges. The best-known algorithm for general bipartite graphs (due to Hopcroft and Karp) takes O(m√n) time. In regular bipartite graphs, however, a matching is known to be computable in O(m) time (due to Cole, Ost and Schirra). In a recent line of work by Goel, Kapralov and Khanna, the O(m) time algorithm was improved first to Õ(min(m, n^{2.5}/d)) and then to Õ(min(m, n^2/d)). It was also shown that the latter algorithm is optimal up to polylogarithmic factors among all algorithms that use non-adaptive uniform sampling to reduce the size of the graph as a first step.
In this paper, we give a randomized algorithm that finds a perfect matching in a d-regular graph and runs in O(n log n) time (both in expectation and with high probability). The algorithm performs an appropriately truncated random walk on a modified graph to successively find augmenting paths. Our algorithm may be viewed as using adaptive uniform sampling, and is thus able to bypass the limitations of (non-adaptive) uniform sampling established in earlier work. We also show that randomization is crucial for obtaining o(nd) time algorithms by establishing an Ω(nd) lower bound for any deterministic algorithm. Our techniques also give an algorithm that successively finds a matching in the support of a doubly stochastic matrix in expected time O(n log^2 n), with O(m) pre-processing time; this gives a simple O(m + mn log^2 n) time algorithm for finding the Birkhoff-von Neumann decomposition of a doubly stochastic matrix.

A. Goel, M. Kapralov, and S. Khanna, Perfect matchings in O(n log n) time in regular bipartite graphs

28.10.2015
Ariel Gabizon
Technion, Israel
Informatyka Teoretyczna
Quasi-Linear Size Zero Knowledge from Linear-Algebraic PCPs

A probabilistically checkable proof (PCP) enables, e.g., checking the satisfiability of a 3-SAT formula ɸ, while only examining a constant number of locations in the proof. A long line of research led to the construction of PCPs with length that is quasi-linear in n := |ɸ|.


In a zero knowledge PCP with knowledge bound K, reading any K symbols of the proof reveals no additional information besides the validity of the statement; e.g., no information is revealed about the assignment satisfying ɸ. Kilian, Petrank, and Tardos gave a transformation from any PCP into a ZK-PCP with knowledge bound K, for any desired K. A drawback of their transformation is that it requires multiplying the proof length by a factor of (at least) K^6.


In this work, we show how to construct PCPs that are zero knowledge for knowledge bound K and of length quasilinear in K and n, provided that the prover is forced to write the proof in two rounds. In this model, which we call duplex PCP (DPCP), the verifier gets an oracle string from the prover, then replies with some randomness, and then gets another oracle string from the prover, and it can make up to K queries to both oracles.


Deviating from previous works, our constructions do not invoke the PCP Theorem as a black box but rely on algebraic properties of a specific family of PCPs. We show that if the PCP has a certain linear-algebraic structure (which many constructions have, including [BFLS91, ALMSS98, BS08]), we can add the zero knowledge property at virtually no cost while introducing only minor modifications in the algorithms of the prover and verifier. We believe that our linear-algebraic characterization of PCPs may be of independent interest, as it gives a simplified way to view previous well-studied PCP constructions.


Joint work with Eli Ben-Sasson, Alessandro Chiesa and Madars Virza
27.05.2015
Paweł Zegartowski
Informatyka Teoretyczna
Cache-Oblivious Hashing
The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key only takes expected average t_q=1+1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works well with any blocking. Such a hash table will automatically perform well across all levels of the memory hierarchy and does not need any hardware-specific tuning, an important feature in autonomous databases.

We first show that linear probing, a classical collision resolution strategy for hash tables, can easily be made cache-oblivious, but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547–558, 2011) achieves t_q = 1 + 1/2^Ω(b), thus matching the cache-aware bound, if the following two conditions hold: (a) b is a power of 2; and (b) every block starts at a memory address divisible by b. Note that the two conditions hold on a real machine, although they are not stated in the cache-oblivious model. Interestingly, we also show that neither condition is dispensable: if either of them is removed, the best obtainable bound is t_q = 1 + O(α/b), which is exactly what linear probing achieves.
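
A textbook linear-probing table illustrates why the strategy is naturally cache-oblivious: probes scan consecutive cells, so a lookup that touches a run of length r crosses at most about r/b + 1 blocks for any block size b. A minimal sketch (fixed capacity; the load factor is assumed to stay below 1):

    class LinearProbingTable:
        def __init__(self, capacity=16):
            self.slots = [None] * capacity

        def _probe(self, key):
            # Scan consecutive cells: the access pattern is sequential in
            # memory, hence cache-friendly for every block size at once.
            i = hash(key) % len(self.slots)
            while self.slots[i] is not None and self.slots[i][0] != key:
                i = (i + 1) % len(self.slots)
            return i

        def insert(self, key, value):
            self.slots[self._probe(key)] = (key, value)

        def lookup(self, key):
            slot = self.slots[self._probe(key)]
            return None if slot is None else slot[1]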


References:
Rasmus Pagh, Zhewei Wei, Ke Yi, Qin Zhang, Cache-Oblivious Hashing, Algorithmica (2014) 69:864–883
15.04.2015
Maciej Bendkowski
Informatyka Teoretyczna
Contention Resolution under Selfishness
In many communications settings, such as wired and wireless local-area networks, when multiple users attempt to access a communication channel at the same time, a conflict results and none of the communications are successful. Contention resolution is the study of distributed transmission and retransmission protocols designed to maximize notions of utility such as channel utilization in the face of blocking communications.

An additional issue to be considered in the design of such protocols is that selfish users may have an incentive to deviate from the prescribed behavior if another transmission strategy increases their utility. The work of Fiat et al. (in SODA'07, pp. 179–188, SIAM, Philadelphia, 2007) addresses this issue by constructing an asymptotically optimal incentive-compatible protocol. However, their protocol assumes the cost of any single transmission is zero, and the protocol completely collapses under non-zero transmission costs.

In this paper we treat the case of non-zero transmission cost c. We present asymptotically optimal contention resolution protocols that are robust to selfish users, in two different channel feedback models. Our main result is in the Collision Multiplicity Feedback model, where after each time slot the number of attempted transmissions is returned as feedback to the users. In this setting, we give a protocol that has expected cost Θ(n + c·log n) and is in o(1)-equilibrium, where n is the number of users.


References:
George Christodoulou, Katrina Ligett, Evangelia Pyrga, Contention Resolution under Selfishness, Algorithmica (2014) 70:675–693
11.03.2015
Piotr Danilewski
Universität des Saarlandes
Informatyka Teoretyczna
AnyDSL - a host for any language
In a multi-domain project, there is no single programming language that can be used to program everything. Instead, a combination of general-purpose and domain-specific languages (DSLs) are used. Unfortunately, many domains lack a good, representative DSL, due to domain diversity and difficulty of creating a new compiler. Moreover, the communication across the languages is limited, often requiring the data to be serialized, and reducing the quality of optimization and compile-time verification.

In our talk we present our approach to solving these problems by introducing a new metamorphic language - AnyDSL. The parsing and execution of AnyDSL can be interleaved and the latter can influence the former - e.g. by introducing a new grammar with which parsing should resume. Regardless of the language the source is written in, all code is translated into a low-level functional representation in continuation-passing style (AIR). AIR serves as a "meeting point", permitting inter-DSL communication. AIR can be interpreted or compiled to LLVM bytecode and then to machine code.

New grammars are defined also within AnyDSL. Unlike typical parser generators, grammar productions and actions are given as functions. AIR supports dynamic staging - a flexible way to define partial evaluation strategies. With it the overhead of having multiple layers of languages can be resolved early. It also allows the DSL designer to specify domain specific optimizations. After all those transformations, AIR can be compiled to machine code that is efficient, with performance comparable to programs written in general-purpose languages.

In our talk we present a new metamorphic language - AnyDSL. AnyDSL permits the native parser to be exchanged with a custom DSL. Regardless of the DSL, however, all code is translated into a low-level functional representation in continuation-passing style (AIR).

New grammars are defined also within AnyDSL, but unlike typical parser generators, grammar productions and actions are given as functions. AIR supports dynamic staging - a flexible way to define partial evaluation strategies to resolve overheads and define domain specific optimizations. AIR can be compiled to machine code, and produced programs have performance comparable to those produced by general-purpose languages.  

14.01.2015
Andrzej Dorobisz
Informatyka Teoretyczna
Scheduling parallel jobs to minimize the makespan
We consider the NP-hard problem of scheduling parallel jobs with release dates on identical parallel machines to minimize the makespan. A parallel job requires simultaneously a prespecified, job-dependent number of machines when being processed. We prove that the makespan of any nonpreemptive list-schedule is within a factor of 2 of the optimal preemptive makespan. This gives the best-known approximation algorithms for both the preemptive and the nonpreemptive variant of the problem. We also show that no list-scheduling algorithm can achieve a better performance guarantee than 2 for the nonpreemptive problem, no matter which priority list is chosen.

List scheduling also works in the online setting where jobs arrive over time and the length of a job becomes known only when it completes; it therefore yields a deterministic online algorithm with competitive ratio 2 as well. In addition, we consider a different online model in which jobs arrive one by one and need to be scheduled before the next job becomes known. We show that no list-scheduling algorithm has a constant competitive ratio. Still, we present the first online algorithm for scheduling parallel jobs with a constant competitive ratio in this context. We also prove a new information-theoretic lower bound of 2.25 for the competitive ratio of any deterministic online algorithm for this model. Moreover, we show that 6/5 is a lower bound for the competitive ratio of any deterministic online algorithm for the preemptive version of the model with jobs arriving over time.
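
A minimal sketch of nonpreemptive list scheduling for rigid parallel jobs, in the spirit of the algorithm analyzed above (greedy in a fixed priority order; every job is assumed to require at most m machines):

    import heapq

    def list_schedule(jobs, m):
        # jobs: list of (release, length, size) in fixed priority order;
        # a job needs `size` machines simultaneously for `length` time
        # units, with size <= m assumed.
        t, free, makespan = 0, m, 0
        running = []                  # min-heap of (finish_time, size)
        pending = list(jobs)
        while pending or running:
            while running and running[0][0] <= t:   # completions free machines
                _, size = heapq.heappop(running)
                free += size
            started = True
            while started:                          # start jobs in list order
                started = False
                for job in pending:
                    release, length, size = job
                    if release <= t and size <= free:
                        free -= size
                        heapq.heappush(running, (t + length, size))
                        makespan = max(makespan, t + length)
                        pending.remove(job)
                        started = True
                        break
            events = [r for r, _, _ in pending if r > t] + \
                     [f for f, _ in running]
            if events:
                t = min(events)                     # jump to the next event
        return makespan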


References:
Berit Johannes, Scheduling parallel jobs to minimize the makespan, Journal of Scheduling, 9 (2006), 433–452
23.04.2014
Maciej Gazda
Eindhoven University of Technology
Podstawy Informatyki
Zielonka's Recursive Algorithm for Parity Games
Parity games are infinite-duration, two-player games played on a finite directed graph. Vertices of the graph are labelled with natural numbers (called priorities) and the winning condition is determined by the parity of the most significant (typically the maximal) priority encountered infinitely often. The games are memoryless determined; moreover, the problem of finding the winning partition of a given game belongs to both the NP and coNP complexity classes. On the other hand, no polynomial algorithm solving parity games has been found (the best one, due to Jurdziński, Paterson and Zwick, has subexponential running time with √n in the exponent). In my talk, I will give a brief introduction to this intriguing computational problem, and then focus on one of the earliest and simplest solving algorithms, namely Zielonka's recursive algorithm. Even though its worst-case running time is not particularly impressive compared to more sophisticated solvers, the experimental study of Friedmann and Lange has shown that in practice it works very well. In order to understand why this is the case, in our recent work with Tim Willemse we have analysed the performance of two variants of the algorithm (standard and optimised) on certain subclasses of parity games (dull, weak, and solitaire). Moreover, we have provided a tighter lower bound on its worst-case running time.
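
For reference, a minimal sketch of the algorithm in its standard recursive formulation (max-parity winning condition; every vertex is assumed to keep at least one successor in each subgame, so dead ends need no special handling):

    def attractor(V, edges, owner, A, player):
        # Vertices in V from which `player` can force the play into A.
        attr = set(A)
        changed = True
        while changed:
            changed = False
            for v in V - attr:
                succ = [u for u in edges[v] if u in V]
                if ((owner[v] == player and any(u in attr for u in succ)) or
                        (owner[v] != player and succ
                         and all(u in attr for u in succ))):
                    attr.add(v)
                    changed = True
        return attr

    def zielonka(V, edges, priority, owner):
        # Returns (W0, W1): the winning regions of players 0 and 1.
        if not V:
            return set(), set()
        p = max(priority[v] for v in V)
        i = p % 2                       # the player favoured by priority p
        A = attractor(V, edges, owner,
                      {v for v in V if priority[v] == p}, i)
        W = zielonka(V - A, edges, priority, owner)
        if not W[1 - i]:                # opponent wins nothing: i wins all
            return (V, set()) if i == 0 else (set(), V)
        B = attractor(V, edges, owner, W[1 - i], 1 - i)
        W2 = zielonka(V - B, edges, priority, owner)
        if 1 - i == 0:
            return (W2[0] | B, W2[1])
        return (W2[0], W2[1] | B)
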
18.12.2013
Bartłomiej Ryniec
Informatyka Teoretyczna
Preprocess, Set, Query!
Thorup and Zwick (J. ACM 52(1):1–24, 2005 and STOC'01) in their seminal work introduced the notion of distance oracles. Given an n-vertex weighted undirected graph with m edges, they show that for any integer k ≥ 1 it is possible to preprocess the graph in Õ(m n^{1/k}) time and generate a compact data structure of size O(k n^{1+1/k}). For each pair of vertices, it is then possible to retrieve an estimated distance with multiplicative stretch 2k−1 in O(k) time. For k=2 this gives an oracle of size O(n^{1.5}) that produces in constant time estimated distances with stretch 3.
Recently, Patrascu and Roditty (in: Proc. of 51st FOCS, 2010) broke the theoretical status quo in the field of distance oracles and obtained a distance oracle for sparse unweighted graphs of size O(n^{5/3}) that produces in constant time estimated distances with stretch 2.
In this paper we show that it is possible to break the stretch 2 barrier at the price of non-constant query time in unweighted undirected graphs. We present a data structure that produces estimated distances with 1+ε stretch. The size of the data structure is O(n m^{1−ε'}) and the query time is Õ(m^{1−ε'}). Using it for sparse unweighted graphs we can get a data structure of size O(n^{1.87}) that can supply in O(n^{0.87}) time estimated distances with multiplicative stretch 1.75.
References:
Ely Porat, Liam Roditty, Preprocess, Set, Query! Algorithmica (2013) 67:516–528
30.10.2013
Igor Adamski
Informatyka Teoretyczna
Linked Dynamic Tries with Applications to LZ-Compression in Sublinear Time and Space
The dynamic trie is a fundamental data structure with applications in many areas of computer science. This paper proposes a new technique for maintaining a dynamic trie T of size at most 2^w nodes under the unit-cost RAM model with a fixed word size w. It is based on the idea of partitioning T into a set of linked small tries, each of which can be maintained efficiently. Our method is not only space-efficient, but also allows the longest common prefix between any query pattern P and the strings currently stored in T to be computed in o(|P|) time for small alphabets, and allows any leaf to be inserted into or deleted from T in o(log |T|) time. To demonstrate the usefulness of our new data structure, we apply it to LZ-compression. Significantly, we obtain the first algorithm for generating the LZ78 encoding of a given string of length n over an alphabet of size σ in sublinear (o(n)) time and sublinear (o(n log σ) bits) working space for small alphabets (σ = 2^{o(log n · log log log n / (log log n)^2)}).
Moreover, the working space for our new algorithm is asymptotically less than or equal to the space for storing the output compressed text, regardless of the alphabet size.  
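
For comparison, the plain linear-time LZ78 encoder with a dictionary-based trie is only a few lines; the paper's contribution is producing the same output in o(n) time and sublinear working space:

    def lz78_encode(s):
        # Output: list of (phrase_index, next_char) pairs, with phrase 0
        # being the empty phrase. The trie maps (phrase, char) -> phrase.
        trie = {}
        next_index = 1
        out = []
        cur = 0                    # longest phrase matched so far
        for c in s:
            if (cur, c) in trie:
                cur = trie[(cur, c)]
            else:
                trie[(cur, c)] = next_index
                out.append((cur, c))
                next_index += 1
                cur = 0
        if cur:                    # flush a trailing, fully-matched phrase
            out.append((cur, ""))
        return out
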
References:
Jesper Jansson, Kunihiko Sadakane, Wing-Kin Sung, Linked Dynamic Tries with Applications to LZ-Compression in Sublinear Time and Space, Algorithmica DOI 10.1007/s00453-013-9836-6
14.10.2013
W. Hugh Woodin
UC Berkeley
Informatyka Teoretyczna
The Continuum Hypothesis and the search for Mathematical Infinity
New place and date: Oct 14, 2013, 16:00, Polska Akademia Umiejętności, Kraków, Sławkowska 17
By the middle of the 20th century, the problem of the Continuum Hypothesis was widely regarded as one of the most prominent problems in all of mathematics. Remarkably, this question cannot be solved on the basis of the basic principles (the ZFC axioms) on which the entire subject is based. This discovery of Cohen, 50 years ago this summer, is arguably one of the major discoveries in the history of modern mathematics.

But does this mean that the question of the Continuum Hypothesis has no answer? Any "solution" must involve the adoption of new principles but which principles should one adopt? Alternatively, perhaps the correct assessment of Cohen's discovery is that the entire enterprise of the mathematical study of Infinity is ultimately doomed because the entire subject is simply a human fiction without any true foundation. In this case, any "solution" to the Continuum Hypothesis is just an arbitrary (human) choice.

Over the last few years a scenario has emerged by which, with the addition of a single new principle, not only can the problem of the Continuum Hypothesis be resolved, but so can all of the other problems which Cohen's method has been used to show are unsolvable (and there have been many such problems). Moreover, the extension of the basic (ZFC) principles by this new principle would be seen as an absolutely compelling option based on the fundamental intuitions on which the entire mathematical conception of Infinity is founded.

However, this scenario critically depends upon the outcome of a single conjecture. If this conjecture is false then the entire approach, which is the culmination of nearly 50 years of research, fails, or at the very least has to be significantly revised.

Thus the mathematical study of Infinity has arguably reached a tipping point. But which way will it tip?  

08.05.2013
Sebastian Syta
Informatyka Teoretyczna
A Sublinear Time Algorithm for PageRank Computations
In a network, identifying all vertices whose PageRank is more than a given threshold value Δ is a basic problem that has arisen in Web and social network analyses. In this paper, we develop a nearly optimal, sublinear time, randomized algorithm for a close variant of this problem. When given a directed network G = (V, E), a threshold value Δ, and a positive constant c > 3, with probability 1 − o(1), our algorithm will return a subset S ⊆ V with the property that S contains all vertices of PageRank at least Δ and no vertex with PageRank less than Δ/c. The running time of our algorithm is always Õ(n/Δ). In addition, our algorithm can be efficiently implemented in various network access models including the Jump and Crawl query model recently studied by Brautbar and Kearns, making it suitable for dealing with large social and information networks. As part of our analysis, we show that any algorithm for solving this problem must have expected time complexity Ω(n/Δ). Thus, our algorithm is optimal up to logarithmic factors. Our algorithm (for identifying vertices with significant PageRank) applies a multi-scale sampling scheme that uses a fast personalized PageRank estimator as its main subroutine. For that, we develop a new local randomized algorithm for approximating personalized PageRank which is more robust than the earlier ones developed by Jeh and Widom and by Andersen, Chung, and Lang.
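
The flavor of such a subroutine can be conveyed by the simplest Monte Carlo estimator of personalized PageRank (an illustrative sketch only; the paper's estimator is more refined): run short random walks from the source, restarting with probability α at every step, and use the endpoint frequencies as the estimate.

    import random

    def personalized_pagerank(out_links, source, alpha=0.15, walks=10000):
        # out_links: {v: list of successors}. Each walk continues with
        # probability 1 - alpha and stops otherwise; the distribution of
        # stopping points converges to the personalized PageRank vector.
        rng = random.Random(1)
        counts = {}
        for _ in range(walks):
            v = source
            while rng.random() > alpha:
                links = out_links.get(v)
                if not links:
                    break              # dangling node: stop the walk
                v = rng.choice(links)
            counts[v] = counts.get(v, 0) + 1
        return {v: c / walks for v, c in counts.items()}
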
References:
Christian Borgs, Michael Brautbar, Jennifer Chayes, and Shang-Hua Teng, A Sublinear Time Algorithm for PageRank Computations
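The main subroutine above is a local personalized PageRank estimator. As a point of reference, here is a minimal Python sketch of the standard Monte Carlo estimator based on short restarting random walks; the paper's local algorithm is a more robust variant of this primitive, and the graph encoding and parameters here are purely illustrative.

import random

def personalized_pagerank_mc(adj, source, alpha=0.15, walks=10000):
    """Estimate personalized PageRank from `source` by running random
    walks that stop (teleport) with probability `alpha` at each step.
    `adj` maps each vertex to a list of out-neighbours."""
    counts = {}
    for _ in range(walks):
        v = source
        # Walk until teleportation; a walk stuck at a sink ends there.
        while random.random() > alpha and adj.get(v):
            v = random.choice(adj[v])
        counts[v] = counts.get(v, 0) + 1
    # The endpoint distribution of such walks is the PPR vector.
    return {v: c / walks for v, c in counts.items()}

# Toy usage on a small directed graph:
adj = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
print(personalized_pagerank_mc(adj, source=0))
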
17.04.2013
Tomasz Kołodziejski
Informatyka Teoretyczna
8/5 Approximation for TSP Paths
We prove the approximation ratio 8/5 for the metric s-t-Path-TSP problem, and more generally for shortest connected T-joins. The algorithm that achieves this ratio is the simple "Best of Many" version of Christofides' algorithm (1976), suggested by An, Kleinberg and Shmoys (2012), which consists in determining the best Christofides s-t-tour out of those constructed from a family F_{>0} of trees having a convex combination dominated by an optimal solution x^* of the fractional relaxation. They give the approximation guarantee (√5+1)/2 for such an s-t-tour, which is the first improvement after the 5/3 guarantee of Hoogeveen's Christofides-type algorithm (1991). Cheriyan, Friggstad and Gao (2012) extended this result to a 13/8-approximation of shortest connected T-joins, for |T|≥4.

The ratio 8/5 is proved by simplifying and improving the approach of An, Kleinberg and Shmoys that consists in completing x^*/2 in order to dominate the cost of "parity correction" for spanning trees. We partition the edge-set of each spanning tree in F_{>0} into an s-t-path (or more generally, into a T-join) and its complement, which induces a decomposition of x^*. This decomposition can be refined and then efficiently used to complete x^*/2 without using linear programming or particular properties of T, but by adding to each cut deficient for x^*/2 an individually tailored, explicitly given vector, inherent in the problem.

A simple example shows that the Best of Many Christofides algorithm may not find an s-t-tour shorter than 3/2 times the (incidentally common) optima of the problem and of its fractional relaxation.


References:
András Sebő, Eight-Fifth Approximation for TSP Paths, Integer Programming and Combinatorial Optimization, LNCS 7801, 2013, pp. 362-374
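For illustration, a Python sketch of the Best-of-Many scheme on a complete metric graph, assuming networkx (version 3 or later) and taking the family of spanning trees as given; in the actual algorithm the family comes from a convex decomposition of the LP optimum x^*, which is omitted here. The per-tree step is Hoogeveen's Christofides variant for s-t paths.

import networkx as nx

def christofides_st_path(G, tree, s, t):
    """Hoogeveen's variant: add a min-weight perfect matching on
    T = (odd-degree tree vertices) XOR {s, t}, then shortcut an
    Eulerian s-t trail. G must be complete with metric 'weight's."""
    T = {v for v in tree if tree.degree(v) % 2 == 1} ^ {s, t}
    M = nx.min_weight_matching(G.subgraph(T), weight="weight")
    multi = nx.MultiGraph()
    multi.add_edges_from(tree.edges(data=True))
    multi.add_edges_from((u, v, G[u][v]) for u, v in M)
    walk = [u for u, v in nx.eulerian_path(multi, source=s)] + [t]
    seen, path = {t}, []            # reserve t for the end of the path
    for v in walk:
        if v not in seen:
            seen.add(v)
            path.append(v)
    return path + [t]

def best_of_many(G, trees, s, t):
    """Return the cheapest Christofides s-t path over the tree family."""
    def cost(p):
        return sum(G[p[i]][p[i + 1]]["weight"] for i in range(len(p) - 1))
    return min((christofides_st_path(G, tr, s, t) for tr in trees), key=cost)

# Usage idea: trees would come from a convex decomposition of x^*;
# e.g. trees = [nx.minimum_spanning_tree(G)] recovers plain Hoogeveen.
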
09.01.2013
Jakub Adamek
Informatyka Teoretyczna
A Distributed O(1)-Approximation Algorithm for the Uniform Facility Location Problem
We investigate a metric facility location problem in a distributed setting. In this problem, we assume that each point is a client as well as a potential location for a facility and that the opening costs for the facilities and the demands of the clients are uniform. The goal is to open a subset of the input points as facilities such that the accumulated cost for the whole point set, consisting of the opening costs for the facilities and the connection costs for the clients, is minimized.
We present a randomized distributed algorithm that computes in expectation an O(1)-approximate solution to the metric facility location problem described above. Our algorithm works in a synchronous message passing model, where each point is an autonomous computational entity that has its own local memory and that communicates with the other entities by message passing. We assume that each entity knows the distance to all the other entities, but does not know any of the other pairwise distances. Our algorithm uses three rounds of all-to-all communication with message sizes bounded to O(log(n)) bits, where n is the number of input points.
We extend our distributed algorithm to constant powers of metric spaces. For a metric exponent l≥1, we obtain a randomized O(1)-approximation algorithm that uses three rounds of all-to-all communication with message sizes bounded to O(log(n)) bits.  
References:
Joachim Gehweiler, Christiane Lammersen, Christian Sohler, A Distributed O(1)-Approximation Algorithm for the Uniform Facility Location Problem, Algorithmica DOI 10.1007/s00453-012-9690-y
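The distributed algorithm itself is not spelled out in the abstract; as background, here is a sequential Python sketch of the classical Mettu-Plaxton-style rule for uniform facility location (compute a radius per point, then open points greedily in radius order), which is the kind of O(1)-approximation such distributed schemes emulate. All names and parameters are illustrative.

import math

def mp_radius(p, points, dist, f, eps=1e-9):
    """Binary-search the radius r with
    sum over {q : dist(p,q) <= r} of (r - dist(p,q)) = f."""
    lo, hi = 0.0, f + max(dist(p, q) for q in points)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        charge = sum(max(0.0, mid - dist(p, q)) for q in points)
        lo, hi = (mid, hi) if charge < f else (lo, mid)
    return hi

def facility_location(points, dist, f):
    """Open facilities greedily in increasing radius order; a point is
    opened unless an already open facility lies within twice its radius."""
    radii = sorted((mp_radius(p, points, dist, f), p) for p in points)
    open_facilities = []
    for r, p in radii:
        if all(dist(p, q) > 2 * r for q in open_facilities):
            open_facilities.append(p)
    return open_facilities

pts = [(0, 0), (0, 1), (5, 5), (6, 5)]
d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
print(facility_location(pts, d, f=2.0))   # opens one point per cluster
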
19.12.2012
Radoslaw Smyrek
Informatyka Teoretyczna
Recognizing d-Interval Graphs and d-Track Interval Graphs
A d-interval is the union of d disjoint intervals on the real line. A d-track interval is the union of d disjoint intervals on d disjoint parallel lines called tracks, one interval on each track. As generalizations of the ubiquitous interval graphs, d-interval graphs and d-track interval graphs have wide applications, traditionally to scheduling and resource allocation, and more recently to bioinformatics. In this paper, we prove that recognizing d-track interval graphs is NP-complete for any constant d≥2. This confirms a conjecture of Gyárfás and West in 1995. Previously only the complexity of the case d=2 was known. Our proof in fact implies that several restricted variants of this graph recognition problem, i.e., recognizing balanced d-track interval graphs, unit d-track interval graphs, and (2,..., 2) d-track interval graphs, are all NP-complete. This partially answers another question recently raised by Gambette and Vialette. We also prove that recognizing depth-two 2-track interval graphs is NP-complete, even for the unit case. In sharp contrast, we present a simple linear-time algorithm for recognizing depth-two unit d-interval graphs. These and other results of ours give partial answers to a question of West and Shmoys in 1984 and a similar question of Gyárfás and West in 1995. Finally, we give the first bounds on the track number and the unit track number of a graph in terms of the number of vertices, the number of edges, and the maximum degree, and link the two numbers to the classical concepts of arboricity.  
References:
Minghui Jiang: Recognizing d-Interval Graphs and d-Track Interval Graphs, Algorithmica DOI 10.1007/s00453-012-9651-5
05.12.2012
Michał Masłowski
Informatyka Teoretyczna
On the Exact Complexity of Evaluating Quantified k-CNF
We relate the exponential complexities 2^{s(k)n} of k-SAT and the exponential complexity 2^{s(EVAL(Π_2 3-CNF))n} of EVAL(Π_2 3-CNF) (the problem of evaluating quantified formulas of the form ∀x∃yF(x,y) where F is a 3-CNF in x-variables and y-variables) and show that s(∞) (the limit of s(k) as k→∞) is at most s(EVAL(Π_2 3-CNF)). Therefore, if we assume the Strong Exponential-Time Hypothesis, then there is no algorithm for EVAL(Π_2 3-CNF) running in time 2^{cn} with c<1.
On the other hand, a nontrivial exponential-time algorithm for EVAL(Π_2 3-CNF) would provide a k-SAT solver with better exponent than all current algorithms for sufficiently large k. We also show that several syntactic restrictions of the evaluation problem EVAL(Π_2 3-CNF) have nontrivial algorithms, and provide strong evidence that the hardest cases of EVAL(Π_2 3-CNF) must have a mixture of clauses of two types: one universally quantified literal and two existentially quantified literals, or only existentially quantified literals. Moreover, the hardest cases must have at least n−o(n) universally quantified variables, and hence only o(n) existentially quantified variables. Our proofs involve the construction of efficient minimally unsatisfiable k-CNFs and the application of the Sparsification Lemma.
References:
Chris Calabro, Russell Impagliazzo, Ramamohan Paturi, On the Exact Complexity of Evaluating Quantified k-CNF, Algorithmica DOI 10.1007/s00453-012-9648-0
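To make the problem concrete, here is a brute-force Python evaluator for such Π_2 formulas ∀x∃y F(x,y); it runs in time 2^{|x|+|y|}, and the results above indicate that substantially beating the trivial exponent is unlikely. The clause encoding is illustrative.

from itertools import product

def eval_pi2_cnf(clauses, x_vars, y_vars):
    """Decide forall-x exists-y F(x, y) by brute force. A clause is a
    list of (variable, sign) pairs; an assignment maps variables to bools."""
    def satisfied(assign):
        return all(any(assign[v] == s for v, s in cl) for cl in clauses)
    for xs in product([False, True], repeat=len(x_vars)):
        assign = dict(zip(x_vars, xs))
        if not any(satisfied({**assign, **dict(zip(y_vars, ys))})
                   for ys in product([False, True], repeat=len(y_vars))):
            return False       # this x has no witnessing y
    return True

# (x1 or y1) and (not x1 or not y1): y1 = not x1 always works -> True.
print(eval_pi2_cnf([[("x1", True), ("y1", True)],
                    [("x1", False), ("y1", False)]], ["x1"], ["y1"]))
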
16.05.2012
Kasper Kopeć
Informatyka Teoretyczna
Minimum Weight Cycles and Triangles: Equivalences and Algorithms
We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected n-node graph with edge weights in {1,...,M} or in a directed n-node graph with edge weights in {-M,...,M} and no negative cycles can be efficiently reduced to finding a minimum weight triangle in a Θ(n)-node undirected graph with weights in {1,...,O(M)}. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be "encoded" using only three edges within roughly the same weight interval! This resolves a longstanding open problem posed by Itai and Rodeh [SIAM J. Computing 1978 and STOC'77].
A direct consequence of our efficient reductions is an O(Mn^ω)-time algorithm, using fast matrix multiplication (FMM), for finding a minimum weight cycle both in undirected graphs with integral weights from the interval [1,M] and in directed graphs with integral weights from the interval [-M,M]. The latter seems to reveal a strong separation between the all pairs shortest paths (APSP) problem and the minimum weight cycle problem in directed graphs, as the fastest known APSP algorithm has a running time of O(M^{0.681}n^{2.575}) by Zwick [J. ACM 2002].
In contrast, when only combinatorial algorithms are allowed (that is, without FMM), the only known solution to minimum weight cycle is by computing APSP. Interestingly, any separation between the two problems in this case would be an amazing breakthrough, as by a recent paper by Vassilevska W. and Williams [FOCS'10], any O(n^{3−ε})-time algorithm (ε>0) for minimum weight cycle immediately implies an O(n^{3−δ})-time algorithm (δ>0) for APSP.
 
References:
Liam Roditty and Virginia Vassilevska Williams, Minimum Weight Cycles and Triangles: Equivalences and Algorithms, http://arxiv.org/abs/1104.2882v1
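A Python sketch of the combinatorial baseline mentioned above: compute all-pairs shortest paths (here with Floyd-Warshall) and close each edge (u,v) into a cycle with a shortest path from v back to u; assuming no negative cycles, the minimum over all edges is the minimum weight cycle.

def min_weight_cycle(n, edges):
    """APSP baseline: Floyd-Warshall, then close each edge (u, v, w)
    into a cycle with a shortest v-u path. `edges` is a list of
    (u, v, w) triples; the graph must have no negative cycles."""
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return min((w + dist[v][u] for u, v, w in edges), default=INF)

# A directed triangle of weight 6 plus a heavier 2-cycle of weight 10:
print(min_weight_cycle(4, [(0, 1, 1), (1, 2, 2), (2, 0, 3),
                           (0, 3, 5), (3, 0, 5)]))   # -> 6
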
07.03.2012
Kamil Kraszewski
Informatyka Teoretyczna
New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and Its Algorithmic Applications
A pair of unit clauses is called conflicting if it is of the form (x), (¬x). A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28:411-421, 1981) showed that for each UCF CNF formula with m clauses we can simultaneously satisfy at least ϕ'm clauses, where ϕ'=(√5−1)/2. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula F with m clauses we can find, in polynomial time, a subformula F' with m' clauses such that we can simultaneously satisfy at least ϕ'm+(1−ϕ')m'+(2−3ϕ')n''/2 clauses (in F), where n'' is the number of variables in F which are not in F'. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds m/2 and m(√5−1)/2. The former bound is tight for general formulas, and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31:335-354, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most 6k+3 variables and 10k clauses. We improve this to 4k variables and (2√5+4)k clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most (7+3√5)k variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.
References:
Robert Crowston, Gregory Gutin, Mark Jones and Anders Yeo, A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and Its Algorithmic Applications, Algorithmica DOI 10.1007/s00453-011-9550-1
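A small Python check of the statement of the Lieberherr-Specker bound on a toy unit-conflict-free formula, by exhaustive search over assignments; the formula and encoding are illustrative, and this demonstrates the bound itself, not the paper's polynomial-time construction.

from itertools import product

PHI = (5 ** 0.5 - 1) / 2          # the Lieberherr-Specker constant, ~0.618

def max_sat(clauses, variables):
    """Brute-force maximum number of simultaneously satisfied clauses;
    clauses are lists of (variable, sign) literals with sign in {0, 1}."""
    best = 0
    for bits in product([0, 1], repeat=len(variables)):
        a = dict(zip(variables, bits))
        best = max(best, sum(any(a[v] == s for v, s in cl) for cl in clauses))
    return best

# A UCF formula: its unit clauses (x) and (not y) do not conflict.
clauses = [[("x", 1)], [("y", 0)], [("x", 0), ("y", 1)], [("x", 0), ("y", 0)]]
m = len(clauses)
print(max_sat(clauses, ["x", "y"]), ">=", PHI * m)   # 3 >= 2.47...
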
11.01.2012
Paweł Wanat
Informatyka Teoretyczna
Exact Algorithms for Edge Domination
An edge dominating set in a graph G=(V,E) is a subset of the edges D⊆E such that every edge in E is adjacent or equal to some edge in D. The problem of finding an edge dominating set of minimum cardinality is NP-hard. We present a faster exact exponential time algorithm for this problem. Our algorithm uses O(1.3226^n) time and polynomial space. The algorithm combines an enumeration approach of minimal vertex covers in the input graph with the branch and reduce paradigm. Its time bound is obtained using the measure and conquer technique. The algorithm is obtained by starting with a slower algorithm which is refined step by step. In each of these refinement steps, the worst cases in the measure and conquer analysis of the current algorithm are reconsidered and a new branching strategy is proposed on one of these worst cases. In this way a series of algorithms appears, each one slightly faster than the previous one, ending in the O(1.3226^n) time algorithm. For each algorithm in the series, we also give a lower bound on its running time. We also show that the related problems: minimum weight edge dominating set, minimum maximal matching and minimum weight maximal matching can be solved in O(1.3226^n) time and polynomial space using modifications of the algorithm for edge dominating set. In addition, we consider the matrix dominating set problem which we solve in O(1.3226^{n+m}) time and polynomial space for n×m matrices, and the parametrised minimum weight maximal matching problem for which we obtain an O∗(2.4179^k) time and space algorithm.
References:
Johan M.M. van Rooij, Hans L. Bodlaender, Exact Algorithms for Edge Domination, Algorithmica, DOI 10.1007/s00453-011-9546-x
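For contrast with the O(1.3226^n) branch-and-reduce algorithm, a naive exhaustive Python baseline for minimum edge dominating set, exponential in the number of edges and useful only to pin down the problem definition.

from itertools import combinations

def min_edge_dominating_set(edges):
    """Exhaustive search: smallest D such that every edge of the graph
    shares an endpoint with some edge in D."""
    edges = [frozenset(e) for e in edges]
    for k in range(len(edges) + 1):
        for D in combinations(edges, k):
            if all(any(e & d for d in D) for e in edges):
                return [tuple(e) for e in D]

# A path on 5 vertices needs two edges, e.g. (0,1) and (2,3):
print(min_edge_dominating_set([(0, 1), (1, 2), (2, 3), (3, 4)]))
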
07.12.2011
Wojciech Bukowicki
Informatyka Teoretyczna
Bipartite Matching in the Semi-streaming Model
We present the first deterministic 1+ε approximation algorithm for finding a large matching in a bipartite graph in the semi-streaming model which requires only O((1/ε)^5) passes over the input stream. In this model, the input graph G=(V,E) is given as a stream of its edges in some arbitrary order, and storage of the algorithm is bounded by O(n polylog n) bits, where n=|V|. The only previously known arbitrarily good approximation for general graphs is achieved by the randomized algorithm of McGregor (Proceedings of the International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and Randomization and Computation, Berkeley, CA, USA, pp. 170–181, 2005), which uses Ω((1/ε)^{1/ε}) passes. We show that even for bipartite graphs, McGregor's algorithm needs (1/ε)^{Ω(1/ε)} passes, thus it is necessarily exponential in the approximation parameter. The design as well as the analysis of our algorithm require the introduction of some new techniques. A novelty of our algorithm is a new deterministic assignment of matching edges to augmenting paths which is responsible for the complexity reduction, and gets rid of randomization.

We repeatedly grow an initial matching using augmenting paths up to a length of 2k+1 for k=2/ε. We terminate when the number of augmenting paths found in one iteration falls below a certain threshold, also depending on k, that guarantees a 1+ε approximation. The main challenge is to find those augmenting paths without requiring an excessive number of passes. In each iteration, using multiple passes, we grow a set of alternating paths in parallel, considering each edge as a possible extension as it comes along in the stream. Backtracking is used on paths that fail to grow any further. The so-called position limits are crucial: when a matching edge is the i-th matching edge in a path and it is then removed by backtracking, it will only be inserted into a path again at a position strictly less than i. This rule strikes a balance between terminating quickly on the one hand and giving the procedure enough freedom on the other hand.


References:
Sebastian Eggert, Lasse Kliemann, Peter Munstermann, Anand Srivastav, Bipartite Matching in the Semi-streaming Model, Algorithmica, DOI 10.1007/s00453-011-9556-8
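The classical one-pass baseline that the algorithm above improves upon is greedy maximal matching, which stores only the matched endpoints and is 1/2-approximate; a minimal Python sketch (the stream encoding is illustrative).

def greedy_streaming_matching(edge_stream):
    """One pass over the stream: keep an edge iff both endpoints are
    still free. The result is a maximal matching, hence a
    1/2-approximation, and only the matched endpoints are stored."""
    matched, matching = set(), []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

# Edges arriving in arbitrary order:
print(greedy_streaming_matching([(1, "a"), (1, "b"), (2, "a"), (3, "c")]))
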
07.12.2011
Michał Handzlik
Podstawy Informatyki
Some Improvements to Turner's Algorithm for Bracket Abstraction (by M. Bunder)
A computer handles lambda terms more easily if these are translated into combinatory terms. This translation process is called bracket abstraction. The simplest abstraction algorithm, the (fab) algorithm of Curry (see Curry and Feys [6]), is lengthy to implement and produces combinatory terms that increase rapidly in length as the number of variables to be abstracted increases.

A measure of the efficiency of an abstraction algorithm was first introduced by Kennaway as an upper bound on the length of the obtained combinatory term, as a function of the length of the original term and the number of variables to be abstracted. Mulder used these methods, together with experiments on many special cases, to compare the efficiency of the main algorithms listed above. The algorithm of Statman came out as the most efficient in the limiting case, but showed up as almost the worst in a number of reasonably simple special cases. Turner's algorithm was generally the best in these cases and was in fact Mulder's choice overall. In this paper, we first note that what Turner describes as "the improved algorithm of Curry", on which his own is based, is in fact not equivalent to any of Curry's algorithms: Turner's abstracts lack a basic property possessed by all of Curry's as well as many others. Secondly, we give methods whereby Turner's algorithm (as well as others) can be implemented more efficiently, while producing simpler abstracts.
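A small Python sketch of bracket abstraction: Curry-style S/K/I rules extended with the B and C combinators and an eta rule, which is the core of Turner's optimization (Turner's full algorithm also uses the primed combinators S', B', C'; the term representation here is illustrative).

def free_in(x, t):
    """Does variable x occur in term t? Terms are variable/combinator
    names (str) or application pairs (f, a)."""
    return t == x if isinstance(t, str) else free_in(x, t[0]) or free_in(x, t[1])

def abstract(x, t):
    """Translate lambda x. t into S/K/I/B/C combinators, Turner-style."""
    if t == x:
        return "I"
    if not free_in(x, t):
        return ("K", t)                             # constant body
    f, a = t
    if a == x and not free_in(x, f):
        return f                                    # eta: lx. f x = f
    if not free_in(x, f):
        return (("B", f), abstract(x, a))           # x only on the right
    if not free_in(x, a):
        return (("C", abstract(x, f)), a)           # x only on the left
    return (("S", abstract(x, f)), abstract(x, a))  # x on both sides

# lambda x. lambda y. y x  ==>  C I  under the rules above.
print(abstract("x", abstract("y", ("y", "x"))))
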

30.11.2011
Michał Feret
Informatyka Teoretyczna
Guard games on graphs
A team of mobile agents, called guards, tries to keep an intruder out of an assigned area by blocking all possible attacks. In a graph model for this setting, the guards and the intruder are located on the vertices of a graph, and they move from node to node via connecting edges. The area protected by the guards is an induced subgraph of the given graph. We investigate the algorithmic aspects of the guarding problem, which is to find the minimum number of guards sufficient to patrol the area. We show that the guarding problem is PSPACE-hard and provide a set of approximation algorithms. All approximation algorithms are based on the study of a variant of the game where the intruder must reach the guarded area in a single step in order to win. This variant of the game turns out to give a 2-approximation for the guarding problem, and for graphs without cycles of length 5 the minimum numbers of guards required in both games coincide. We give a polynomial time algorithm for solving the one-step guarding problem in graphs of bounded treewidth, and complement this result by showing that the problem is W[1]-hard parameterized by the treewidth of the input graph. We also show that the problem is fixed parameter tractable (FPT) parameterized by the treewidth and maximum degree of the input graph. Finally, we turn our attention to a large class of sparse graphs, including planar graphs and graphs of bounded genus, namely apex-minor-free graphs. We prove that the one-step guarding problem is FPT and possesses a PTAS on apex-minor-free graphs.
References:
Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Guard games on graphs: Keep the intruder out! , Theoretical Computer Science 412 (2011), 6484–6497
23.11.2011
Albert Łącki
Informatyka Teoretyczna
Almost Exact Matchings
In the exact matching problem we are given a graph G, some of whose edges are colored red, and a positive integer k. The goal is to determine if G has a perfect matching, exactly k edges of which are red. More generally, if the matching number of G is m = m(G), the goal is to find a matching with m edges, exactly k edges of which are red, or determine that no such matching exists. This problem is one of the few remaining problems that have efficient randomized algorithms (in fact, this problem is in RNC), but for which no polynomial time deterministic algorithm is known. The first result shows that, in a sense, this problem is as close to being in P as one can get. We give a polynomial time deterministic algorithm that either correctly decides that no maximum matching has exactly k red edges, or exhibits a matching with m(G)−1 edges having exactly k red edges. Hence, the additive error is one. We also present an efficient algorithm for the exact matching problem in families of graphs for which this problem is known to be tractable. We show how to count the number of exact perfect matchings in K_{3,3}-minor-free graphs (these include all planar graphs as well as many others) in O(n^{3.19}) worst case time. Our algorithm can also count the number of perfect matchings in K_{3,3}-minor-free graphs in O(n^{2.19}) time.
References:
Raphael Yuster, Almost Exact Matchings, Algorithmica, DOI 10.1007/s00453-011-9519-0
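To fix the problem definition, a brute-force Python search for a maximum matching with exactly k red edges; this is exponential, whereas the point of the problem is that efficient algorithms are known only via randomization.

from itertools import combinations

def is_matching(M):
    used = [v for e in M for v in e]
    return len(set(used)) == len(used)

def exact_matching(edges, red, k):
    """Among maximum matchings, find one with exactly k red edges."""
    for size in range(len(edges), 0, -1):
        Ms = [M for M in combinations(edges, size) if is_matching(M)]
        if Ms:   # `size` is the matching number m(G)
            for M in Ms:
                if sum(e in red for e in M) == k:
                    return list(M)
            return None   # no maximum matching has exactly k red edges
    return None

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(exact_matching(edges, red={(0, 1)}, k=1))   # -> [(0, 1), (2, 3)]
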
23.03.2011
Maciej Wawro
Informatyka Teoretyczna
Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs
Motivated by applications in batch scheduling of jobs in manufacturing systems and distributed computing, we study two related problems. Given is a set of jobs {J_1,...,J_n}, where J_j has processing time p_j, and an undirected intersection graph G = ({1,2,...,n}, E), with an edge (i,j) whenever the pair of jobs J_i and J_j cannot be processed in the same batch. We are to schedule the jobs in batches, where each batch completes its processing when the last job in the batch completes execution. The goal is to minimize the sum of job completion times. Our two problems differ in the definition of the completion time of a job within a given batch. In the first variant, a job completes its execution when its batch is completed, whereas in the second variant, a job completes execution when its own processing is completed.

For the first variant, we show that an adaptation of the greedy set cover algorithm gives a 4-approximation for perfect graphs. For the second variant, we give new or improved approximations for a number of different classes of graphs. The algorithms are of widely different genres (LP, greedy, subgraph covering), yet they curiously share a common feature in their use of randomized geometric partitioning.  


References:
Leah Epstein, Magnus M. Halldórsson, Asaf Levin, Hadas Shachnai, Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs , Algorithmica 55(2009) 643-665
13.01.2010
06.01.2010
Grzegorz Matecki
Informatyka Teoretyczna
On-line matching on bipartite graphs
We consider bipartite matching in the on-line version as follows. There is a bipartite graph G=(U,V,E), in which the vertices in V are given a priori and each vertex u in U arrives in an on-line fashion (with all its incident edges). An on-line algorithm matches each such vertex u to a previously unmatched adjacent vertex in V, if there is one. Decisions made by the algorithm are irrevocable. The objective is to maximize the size of the resulting matching. It is easy to observe that any greedy algorithm (never leave vertex u unmatched if a match is possible) matches at least n/2 edges, where n is the size of the optimal matching with G given at once. This bound is optimal: no deterministic on-line algorithm can guarantee more.

We propose the following modification of on-line matching. The algorithm matches each incoming vertex u in U to a set S(u) of adjacent vertices in V (instead of a single vertex). In case S(u) and S(x), for an already present x in U, are not disjoint, the algorithm must remove all common vertices from S(x). Additionally, the algorithm has to obey the rule that no S(x) may become empty provided it was initialized with a nonempty set of vertices. An algorithm satisfying the above condition is called adaptive. In this approach a vertex u in U can always be matched to a vertex from S(u) (if S(u) is not empty). Therefore, the number of matched edges equals the number of nonempty sets S(u).

We are going to present the optimal adaptive on-line algorithm, which breaks the n/2 barrier and matches at least 0.589n+o(n) edges.
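For comparison, the classical greedy algorithm that the adaptive approach improves upon, in a short Python sketch together with an adversarial arrival order on which it achieves only n/2.

def online_greedy_matching(arrivals):
    """Each arriving u comes with its neighbourhood in V; match it to
    the first still-free neighbour, irrevocably."""
    taken, matching = set(), {}
    for u, neighbours in arrivals:
        for v in neighbours:
            if v not in taken:
                taken.add(v)
                matching[u] = v
                break
    return matching

# Greedy gets 1 edge here, while the optimum (u1-v2, u2-v1) has 2:
print(online_greedy_matching([("u1", ["v1", "v2"]), ("u2", ["v1"])]))
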

25.11.2009
Piotr Faliszewski
Podstawy Informatyki
Distance Rationalizability of Voting Rules
A voting rule is an algorithm for determining the winner in an election. One can easily come up with many different voting rules, but it is also important to justify why a given rule is natural/useful. There are several approaches that have been used to justify the proposed rules. One justification is to show that a rule satisfies a set of desirable axioms that uniquely identify it. Another is to show that the calculation that it performs is actually maximum likelihood estimation relative to a certain model of noise that affects voters (MLE approach). The third approach, which has recently been actively investigated, is the so-called distance rationalizability framework. In it, a voting rule is defined via a class of consensus elections (i.e., a class of elections that have a clear winner) and a distance function. A candidate c is a winner of an election E if c wins in one of the consensus elections that are closest to E relative to the given distance. In this talk, we show that essentially any voting rule is distance-rationalizable if we do not restrict the two ingredients of the rule: the consensus class and the distance. Thus distance rationalizability of a rule does not by itself guarantee that the voting rule has any desirable properties. However, we demonstrate that the distance used to rationalize a given rule may provide useful information about this rule's behavior. Specifically, we identify a large class of distances, which we call votewise distances, and show that if a rule is rationalized via a distance from this class, many important properties of this rule can be easily expressed in terms of the underlying distance. This enables us to provide a new characterization of scoring rules and to establish a connection with the MLE framework. We also give bounds on the computational complexity of the winner determination problem for distance-rationalizable rules.
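A brute-force Python sketch of the framework for one classical instantiation (consensus class: profiles with a Condorcet winner; distance: total number of adjacent swaps), which is known to rationalize Dodgson's rule. Tiny electorates only; the encoding is illustrative.

from itertools import permutations, product

def kendall(u, v):
    """Number of adjacent swaps between two rankings (Kendall tau)."""
    pos = {c: i for i, c in enumerate(v)}
    return sum(1 for i in range(len(u)) for j in range(i + 1, len(u))
               if pos[u[i]] > pos[u[j]])

def condorcet_winner(profile, cands):
    for c in cands:
        if all(sum(r.index(c) < r.index(d) for r in profile) * 2 > len(profile)
               for d in cands if d != c):
            return c
    return None

def dr_winner(profile, cands):
    """Winner(s) of the closest consensus (Condorcet) profile under the
    swapwise distance -- i.e. Dodgson's rule, distance-rationalized."""
    rankings = list(permutations(cands))
    best, winners = float("inf"), set()
    for cons in product(rankings, repeat=len(profile)):
        w = condorcet_winner(cons, cands)
        if w is None:
            continue
        d = sum(kendall(r, s) for r, s in zip(profile, cons))
        if d < best:
            best, winners = d, {w}
        elif d == best:
            winners.add(w)
    return winners

# A perfect Condorcet cycle: by symmetry, all three candidates tie.
profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
print(dr_winner(profile, ["a", "b", "c"]))
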
13.05.2009
Andrzej Kukier
Informatyka Teoretyczna
Similarity Search in High Dimensions via Hashing
The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search/index structures for performing similarity search over high-dimensional data, e.g. image databases, document collections, time-series databases and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the "curse of dimensionality". That is, the data structures scale poorly with data dimensionality: in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related data structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points from the database so as to ensure that the probability of a collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).
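A minimal Python sketch of one such locality-sensitive scheme, bit sampling for Hamming space: each table hashes every binary vector by a random subset of its coordinates, so nearby points collide in some table with good probability. Parameters are illustrative.

import random

def build_lsh(points, num_tables=4, bits_per_table=8):
    """Hash each binary vector into `num_tables` tables, each keyed by
    a random subset of coordinates; near points collide often."""
    dim = len(points[0])
    tables = []
    for _ in range(num_tables):
        coords = random.sample(range(dim), bits_per_table)
        table = {}
        for i, p in enumerate(points):
            table.setdefault(tuple(p[c] for c in coords), []).append(i)
        tables.append((coords, table))
    return tables

def query(tables, q):
    """Return candidate near neighbours: everything colliding with q."""
    cand = set()
    for coords, table in tables:
        cand.update(table.get(tuple(q[c] for c in coords), []))
    return cand

random.seed(0)
pts = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
q = pts[0][:]; q[3] ^= 1            # a point one bit away from pts[0]
print(0 in query(build_lsh(pts), q))   # True with high probability
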
25.03.2009
Apoloniusz Tyszka
Informatyka Teoretyczna
A hypothetical upper bound for solutions of a Diophantine equation with a finite number of solutions

Let E_n be the set of all equations of the form
x_i = 1, or
x_i + x_j = x_k, or
x_i * x_j = x_k,
where i,j,k range over {1,...,n}. Moreover let K be one of the rings Z,Q,R,C. We construct a system S of equations from E_{21} such that S has infinitely many integer solutions and S has no integer solution in the cube [-2^{2^{21-1}},2^{2^{21-1}}]^{21}. We conjecture that if a system S, contained in E_n, has a finite number of solutions in K, then each such solution (x_1,...,x_n) satisfies (|x_1|,...,|x_n|) \in [0,2^{2^{n-1}}]^n. Applying this conjecture for K=Z, we prove that if a Diophantine equation has only finitely many integer (rational) solutions, then these solutions can be algorithmically found. On the other hand, an affirmative answer to the famous open problem whether each listable subset M of Z^n has a finite-fold Diophantine representation would falsify our conjecture for K=Z.

Full text: http://arxiv.org/abs/0901.2093  
References:
A. Kozlowski and A. Tyszka, A Conjecture of Apoloniusz Tyszka on the Addition of Rational Numbers, 2008
Yu. Matiyasevich, Hilbert's tenth problem: what was done and what is to be done. Hilbert's tenth problem: relations with arithmetic and algebraic geometry (Ghent, 1999), 1-47, Contemp. Math. 270, Amer. Math. Soc., Providence, RI, 2000
A. Tyszka, A system of equations, SIAM Problems and Solutions (electronic only), Problem 07-006, 2007
A. Tyszka, Some conjectures on addition and multiplication of complex (real) numbers, Int. Math. Forum 4 (2009), no. 9-12, 521-530
A. Tyszka, Bounds of some real (complex) solution of a finite system of polynomial equations with rational coefficients
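The doubly exponential bound is natural here, since repeated squaring within E_n already produces values of magnitude 2^{2^{n-2}}, as this small Python illustration shows.

def doubling_tower(n):
    """Solve the E_n-style system x_1 = 1, x_2 = x_1 + x_1,
    x_{k+1} = x_k * x_k: the last value is 2^(2^(n-2)), so doubly
    exponential magnitudes really occur for systems from E_n."""
    xs = [1]
    if n >= 2:
        xs.append(xs[0] + xs[0])
    while len(xs) < n:
        xs.append(xs[-1] * xs[-1])
    return xs

print(doubling_tower(5))   # [1, 2, 4, 16, 256]; 256 = 2^(2^3)
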

07.05.2008
Bożena Woźna, Akademia im. Jana Długosza, Częstochowa
Podstawy Informatyki
Bounded Model Checking for Multi-Agent and Timed Systems
In this talk I will present the results of my research on the automated model checking of real-time systems and multi-agent systems. These results were obtained in collaboration with Prof. Wojciech Penczek (IPI PAN, Warsaw), Dr Andrzej Zbrzezny (AJD, Częstochowa) and Dr Alessio Lomuscio (UCL, London). In particular, I will discuss the Bounded Model Checking method I have proposed, which I have applied both to timed systems and to multi-agent systems. I will also give a short introduction to the formalisms used in the automated model checking of the above-mentioned systems.

A real-time system is (according to the IEEE definition) a system whose correctness depends not only on the logical correctness of its results, but also on the time in which these results are produced. Real-time systems are used, among other places, in industry to supervise technological processes, in the implementation of communication protocols, in air traffic planning and control, etc.

An agent is an entity that acts in some fixed environment and is capable of communicating, monitoring its surroundings and making autonomous decisions. A multi-agent system is a network of agents that communicate and cooperate with one another, pursuing both common and private goals. Multi-agent systems already have an established position in many areas of information technology, e.g. software engineering, e-commerce, telecommunication networks, automated reasoning and argumentation, enterprise management support, etc. Model checking is one of the most widespread methods for the automated verification of the correctness of real-time systems and multi-agent systems. The first papers on this subject appeared in 1981, and since then tools based on continually improved algorithms have been under constant development. The variety of available approaches and solutions results from the existence of many models of the above systems, e.g. interleaving and non-interleaving ones, and of many methods of describing the properties of these systems, e.g. by automata, process algebras or temporal logics. Substantial progress in verification was made in 1990 with the development of methods based on symbolic computation, using the formalism of Binary Decision Diagrams. The next step forward was taken in 1999 by reducing the model checking problem to the satisfiability problem for propositional formulas and exploiting efficient algorithms for the latter.
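As an illustration of the idea behind bounded model checking, an explicit-state Python sketch that explores only runs of length at most k and reports a counterexample if one exists; real BMC instead encodes this unrolling as a propositional satisfiability problem and hands it to a SAT solver. The transition system here is a toy.

def bounded_check(init, step, bad, k):
    """Explore all states reachable in at most k steps from `init`;
    report a counterexample path to a `bad` state if one exists."""
    frontier = [(s, [s]) for s in init]
    seen = set(init)
    for _ in range(k):
        nxt = []
        for s, path in frontier:
            for t in step(s):
                if bad(t):
                    return path + [t]          # counterexample found
                if t not in seen:
                    seen.add(t)
                    nxt.append((t, path + [t]))
        frontier = nxt
    return None                                # no violation within bound k

# Toy counter modulo 8 that must never reach 5:
print(bounded_check({0}, lambda s: [(s + 1) % 8, (s + 2) % 8],
                    lambda s: s == 5, k=4))
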
28.11.2007
Thierry Joly
Podstawy Informatyki
Undeciding lambda-definability again
The Definability Problem (DP for short) is the question whether a given functional of some hereditarily finite type structure over a single atomic type is the interpretation of a closed lambda-term or not. DP was first considered for Full Type Structures by G. Plotkin in 1973 [Plo73]. R. Statman [Sta82] pointed out that deciding it would solve at once quite a few existential problems of the typed lambda-calculus, the most famous of which is the (still open) Matching Problem: "Given lambda-terms A, B, is there a lambda-term X such that AX=B?" Then DP became a little Grail, finally proved undecidable by R. Loader in 1993 [Loa01]. Is that the end of the story? One may object that in smaller models than the Full Type Structures, the few lambda-definable functionals would not be so easily lost, and that deciding definability in any class of (smaller) models that is strongly complete with respect to the lambda-calculus (in the sense of [Sta82]) would also yield the benefits pointed out by Statman. Unfortunately, we will show in this talk that the restriction of DP to a fixed model M is actually undecidable for a fair number of models M, including all the non-trivial stable models and order-extensional models, except possibly the two-element Scott model. These stronger results were obtained by cleaning previous proofs and by identifying their efficient ingredients. This work of simplification also yields a particularly short and easy proof of the undecidability of DP for Church's pure typed lambda-calculus, which will be detailed first.

[Plo73] Gordon Plotkin. Lambda-definability and logical relations. Memorandum SAI-RM-4, University of Edinburgh, 1973.

[Sta82] Richard Statman. Completeness, invariance and lambda-definability. JSL 47:17-26, 1982.

[Loa01] Ralph Loader. The Undecidability of lambda-Definability. In Logic, Meaning and Computation: Essays in Memory of A. Church, 331-342, Anderson & Zeleny editors, Kluwer Acad. Publishers, 2001.  
21.11.2007
Tomasz Połacik, Instytut Matematyki, Uniwersytet Śląski, Katowice
Podstawy Informatyki
Problems of Kripke models for first-order theories
It is a well-known fact that Kripke models are an important and effective tool for the study of intuitionistic first-order theories. For example, they have many interesting applications to such constructive theories as Heyting arithmetic or Kripke-Platek intuitionistic set theory. It is worth noting, however, that in contrast to the situation in classical model theory, general methods and constructions for Kripke models are still lacking.
Recall that, informally, a Kripke model can be viewed as a family of classical models for a given first-order language, together with an order determined by homomorphisms between the models of this family. On the whole structure a notion of satisfaction is defined. In contrast to classical (Tarskian) satisfaction, treated locally in a single model of the family under consideration, the satisfaction defined on a Kripke model is intuitionistic. It is easy to see that the Kripke model determined by a single classical model M with the identity homomorphism can be identified with M viewed as a classical model. In this sense, the notion of a Kripke model can be regarded as a generalization of the classical notion of a first-order model. The question of appropriately generalizing the notions and relationships of classical model theory to Kripke models therefore arises naturally.
One of the basic topics of model theory concerns elementary equivalence. In the talk, the problem of elementary equivalence will be considered for Kripke models, and in order to solve it, a notion of bisimulation for first-order Kripke models will be introduced. This notion is, on the one hand, an extension of the notion of bisimulation for Kripke models of intuitionistic propositional logic and, on the other hand, a generalization, in the sense described above, of the notion of the Ehrenfeucht game for classical first-order models. Besides a discussion of the basic properties of bisimulation, its applications will also be presented. In particular, a construction of an elementary submodel of a Kripke model will be shown.
24.10.2007
17.10.2007
10.10.2007
Bartosz Walczak
Informatyka Teoretyczna
Algorithmic meta-theorems and treewidth

Algorithmic meta-theorems are algorithmic results that apply to whole families of combinatorial problems, instead of just specific problems. These families are usually defined in terms of logic and graph theory. We focus on the model checking problem, which is to decide whether a given graph satisfies a given formula. Some restrictions of this problem to specific classes of graphs (with bounded treewidth, excluded minors etc.) turn out to be fixed-parameter tractable (fpt). We define the combinatorial notions of tree decomposition, treewidth and local treewidth, and prove that MSO model checking on graphs with bounded treewidth and FO model checking on graphs with bounded local treewidth are fpt. The latter result uses Gaifman's Locality Theorem, which is one of the basic tools in finite model theory. The talks are based on a paper by Martin Grohe [4].
References:
[1] B. Courcelle. Graph rewriting: An algebraic and logic approach. Handbook of Theoretical Computer Science, volume B, pp. 194-242, 1990.
[2] J. Flum, M. Grohe. Fixed-parameter tractability, definability, and model checking. SIAM Journal on Computing, 31(1):113-145, 2001.
[3] H. Gaifman. On local and non-local properties. Proceedings of the Herbrand Symposium, Logic Colloquium '81, pp. 105-135, 1982.
[4] M. Grohe. Logic, graphs, and algorithms. 2007.
http://www2.informatik.hu-berlin.de/~grohe/pub/meta-survey.pdf
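To make the model checking problem concrete, a brute-force Python evaluation of a fixed FO sentence on a graph; with q quantifiers this takes time about n^q, which is exactly the dependence the meta-theorems improve to f(formula)·poly(n) on restricted graph classes. The sentence chosen here is illustrative.

def holds_exists_forall(vertices, edge):
    """Check the FO sentence  exists x forall y (x = y or E(x, y)),
    i.e. 'there is a dominating vertex', by brute force."""
    return any(all(x == y or edge(x, y) for y in vertices) for x in vertices)

# A star K_{1,3}: centre 0 dominates everything.
E = {(0, 1), (0, 2), (0, 3)}
edge = lambda u, v: (u, v) in E or (v, u) in E
print(holds_exists_forall(range(4), edge))   # True
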

26.04.2006
Gabor Kun, Eötvös Loránd University, Budapest
Informatyka Teoretyczna
Forbidden patterns and homomorphism problems
We say that two subclasses of NP are (computationally) equivalent if for every language in one class there is a polynomially equivalent one in the other class. A typical equivalence class is the class of k-colouring problems (k in N); each of these problems is either in P or NP-complete. NP turned out not to be equivalent to this class (unless P=NP): there is a problem in NP which is neither in P nor NP-complete.

We show some types of combinatorial problems like edge colourings or graph decompositions expressing the full computational power of the NP class. Hence these problem classes contain also some problems of "exotic" complexity.

The first natural candidate that seems not to be equivalent to NP is the class called MMSNP (Monotone Monadic Strict NP) or Forbidden patterns problem. (A typical example for a language in the class: graphs with vertex set partitionable into two subsets without triangle.) This class is conjectured to contain only NP-complete and polynomial time solvable problems.

We prove that the class MMSNP can be expressed in the simpler terminology of relational structure homomorphism problems (called Constraint Satisfaction Problems): such a language contains, for a fixed structure A, the relational structures that can be mapped homomorphically to A. The first such result was only a randomized equivalence. The proof of the deterministic equivalence uses some type of expander structures, a typical tool in derandomization.
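A short Python sketch of the homomorphism (CSP) formulation for graphs: decide by backtracking whether an input graph maps homomorphically to a fixed template, e.g. into a triangle, which is exactly 3-colourability. The representation is illustrative.

def homomorphism_exists(G_edges, H_edges, G_vertices, H_vertices):
    """Backtracking search for a graph homomorphism G -> H: every edge
    of G must be mapped onto an edge of H."""
    H_adj = set(H_edges) | {(v, u) for u, v in H_edges}
    G_adj = {}
    for u, v in G_edges:
        G_adj.setdefault(u, []).append(v)
        G_adj.setdefault(v, []).append(u)
    order, assign = list(G_vertices), {}

    def extend(i):
        if i == len(order):
            return True
        u = order[i]
        for h in H_vertices:
            # Check consistency with already-assigned neighbours of u.
            if all((h, assign[w]) in H_adj
                   for w in G_adj.get(u, []) if w in assign):
                assign[u] = h
                if extend(i + 1):
                    return True
                del assign[u]
        return False

    return extend(0)

# Homomorphism into a triangle = 3-colourability: C5 is 3-colourable.
triangle = [(0, 1), (1, 2), (2, 0)]
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(homomorphism_exists(c5, triangle, range(5), range(3)))   # True
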

04.01.2006
Wojciech Jawor, University of California, Riverside
Informatyka Teoretyczna
Job Scheduling in Next Generation Computer Networks
Two online job scheduling problems arising in next generation computer networks are discussed.

In the first problem [3] the goal is to schedule n jobs on m identical machines, without preemption, so that the jobs complete in the order of release times and the maximum flow time is minimized. This problem arises in network systems with aggregated links, when it is required that packets complete their arrivals at the destination in the order of their arrivals at the receiver. This requirement is imposed by the IEEE 802.3 standard describing link aggregation in Local Area Networks. We present a deterministic algorithm Block with competitive ratio O(√(n/m)) and show a matching lower bound even for randomized algorithms.

The second problem [1,2] is an online unit-job scheduling problem arising in networks supporting Quality of Service. Jobs are specified by release times, deadlines and nonnegative weights. The goal is to maximize the total weight of jobs that are scheduled by their deadlines. We show that there does not exist a deterministic algorithm with competitive ratio better than 1.618 (the golden ratio). We also give a randomized algorithm with competitive ratio 1.582, showing that randomized algorithms are provably better than deterministic algorithms for this problem.


References:
F.Y.L.Chin, M.Chrobak, S.P.Y.Fung, W.Jawor, J.Sgall and T.Tichy, Online Competitive Algorithms for Maximizing Weighted Throughput of Unit-Jobs
W.Jawor, M.Chrobak and Ch.Dürr, Competitive Analysis of Scheduling Algorithms for Aggregated Links
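For the second problem, the natural deterministic baseline is the greedy rule that at every time step runs the heaviest pending job that can still meet its deadline; greedy is known to be 2-competitive, and the bounds above narrow the gap to between 1.582 and 1.618 for more careful (randomized) rules. A minimal Python sketch, with an illustrative encoding.

import heapq

def greedy_schedule(jobs, horizon):
    """jobs: list of (release, deadline, weight). At each step t, run
    the heaviest job with release <= t < deadline; return total weight."""
    jobs = sorted(jobs)                       # by release time
    pending, total, i = [], 0.0, 0
    for t in range(horizon):
        while i < len(jobs) and jobs[i][0] <= t:
            r, d, w = jobs[i]
            heapq.heappush(pending, (-w, d))  # max-heap on weight
            i += 1
        while pending and pending[0][1] <= t:
            heapq.heappop(pending)            # drop expired jobs
        if pending:
            w, d = heapq.heappop(pending)
            total += -w
    return total

# Greedy earns 6.0 here, while an optimal schedule earns 8.0:
print(greedy_schedule([(0, 1, 3.0), (0, 2, 5.0), (1, 2, 1.0)], horizon=3))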