Research Areas

  • Algorithm Engineering for Dynamic Graph Optimization Problems in Concrete Applications

     

    Prof. Matthias Müller-Hannemann; Martin-Luther-Universität Halle-Wittenberg

    Prof. Dr. Karsten Weihe; Technische Universität Darmstadt

     

    Project website

     

    In many applications of graph algorithms or graph optimization problems, the graphs or the instances undergo frequent small changes. A graph is called dynamic if it is changed by a sequence of change operations. The goal of dynamic graph algorithms is to recompute optimal solutions after updates faster than by solving the problem again from scratch.

    The focus of this project is on several concrete applications of dynamic graph optimization problems, in particular

    1. speed-up techniques for time-dependent multi-criteria shortest path problems in a dynamic scenario,
    2. real-time timetable information for train connections in the case of delays caused by construction work, technical defects, or other reasons,
    3. passenger-oriented delay management in public transport.

    Using algorithm engineering techniques, we aim to close the current gaps between these theoretical approaches and practical solutions.
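
    To illustrate the dynamic setting (a minimal sketch of a folklore update step, not one of the project's algorithms; all identifiers are ours): after a single edge weight decrease, a shortest-path tree can often be repaired locally instead of rerunning Dijkstra's algorithm from scratch.

      #include <cstdio>
      #include <queue>
      #include <utility>
      #include <vector>

      // Sketch: after an edge weight *decrease*, repair the distance array of
      // a single-source shortest-path computation instead of restarting it.
      // Only vertices whose distance actually improves are ever touched.
      struct Edge { int to; int w; };

      void decrease_edge(std::vector<std::vector<Edge>>& g,
                         std::vector<long long>& dist,
                         int u, int edgeIdx, int newWeight) {
          g[u][edgeIdx].w = newWeight;
          int v = g[u][edgeIdx].to;
          if (dist[u] + newWeight >= dist[v]) return;   // nothing changes
          dist[v] = dist[u] + newWeight;
          using QE = std::pair<long long, int>;
          std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
          pq.push({dist[v], v});
          while (!pq.empty()) {                          // propagate improvement
              auto [d, x] = pq.top(); pq.pop();
              if (d > dist[x]) continue;
              for (const Edge& e : g[x])
                  if (d + e.w < dist[e.to]) {
                      dist[e.to] = d + e.w;
                      pq.push({dist[e.to], e.to});
                  }
          }
      }

      int main() {
          // Triangle 0->1 (5), 0->2 (9), 1->2 (5); distances from source 0.
          std::vector<std::vector<Edge>> g = {{{1, 5}, {2, 9}}, {{2, 5}}, {}};
          std::vector<long long> dist = {0, 5, 9};
          decrease_edge(g, dist, 1, 0, 2);   // edge 1->2 now has weight 2
          std::printf("%lld\n", dist[2]);    // 7, repaired without a full rerun
      }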

     
  • Algorithm Engineering for Efficient Genome Comparison

    Prof. Dr. Knut Reinert; Freie Universität Berlin

     

    In the last two decades, the field of genomics has seen large changes, fueled by technological advances in sequencing technology. Nowadays, important large mammalian genomes (e.g. human, rat, mouse) are completely available. Perhaps even more impressively, the time and cost to read genomic data have dropped by four to five orders of magnitude. In the coming years, the rate of generated data will continue to increase as the cost continues to decrease.

    Clearly, this huge amount of data calls for the use of computers and algorithms, and both have been applied successfully to support biologists in their work. Without the contributions of computer science and bioinformatics, the biological and medical advances driven by genomics would have been impossible.

    The pioneering work in bioinformatics is done in close proximity to the sequencing labs and biologists. There, it is most important for a computational result to be available quickly. As long as the result is not obviously lacking, the methodology of designing, implementing, and evaluating the programs and algorithms is not questioned. Often, sequence analysis programs are written with ad hoc heuristics, tailored to the single use case at hand. Beyond some simple metrics like "number of aligned bases/genes", experimental evaluation is often limited to anecdotal considerations. One reason for this is the lack of established metrics.

     

    Benchmarks for Read Mapping

    Read mapping is a key component of modern genomics. In the last decade, dozens of papers have appeared on this topic, owing to the large advances made in sequencing technology. Clearly, it is important for biologists and bioinformaticians to be able to select a good read mapper objectively. However, most published read mappers solve slightly different problems.

    Starting from a precise definition of the read mapping problem, we are developing a precise formulation of what constitutes a solution. Using these formulations, we are designing algorithms and writing programs to evaluate read mapping programs in a fair and objective manner. These are then tied together by wrapper scripts to provide a comprehensive read mapping benchmark.
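
    As a minimal sketch of what an objective evaluation can look like (a simplified criterion of our own, not the benchmark's actual definition): for simulated reads with known origins, one can count a mapping as correct if it lands within a small tolerance of the true position.

      #include <cstdio>
      #include <cstdlib>
      #include <vector>

      // Hypothetical toy evaluation: a mapped read counts as correct if its
      // reported position lies within `tol` bases of its true simulated
      // origin; report the fraction of correctly mapped reads.
      struct ReadResult { long trueStart; long mappedStart; };

      double fraction_correct(const std::vector<ReadResult>& rs, long tol) {
          if (rs.empty()) return 0.0;
          int ok = 0;
          for (const ReadResult& r : rs)
              if (std::labs(r.mappedStart - r.trueStart) <= tol) ++ok;
          return static_cast<double>(ok) / rs.size();
      }

      int main() {
          std::vector<ReadResult> rs = {{100, 100}, {2500, 2503}, {9000, 12000}};
          std::printf("%.2f\n", fraction_correct(rs, 5));  // prints 0.67
      }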

     

    Whole Genome Comparison

    Complex models that capture each and every biological detail quickly become computationally intractable. Simple models, on the other hand, are easy to compute with but vulnerable to an excess of abstraction; they can ultimately lead to efficient algorithms producing useless results. We plan to review approaches from the literature, derive precise problem formulations for whole genome alignment and comparison from them, introduce metrics for whole genome alignments, and design, implement, and evaluate (i.e. engineer) algorithms that solve these problems.

     
  • Algorithm Engineering for MONET and related covering problems

     

    Prof. Dr. Martin Mundhenk; University of Jena

     

    Project website

     

    Given two monotone Boolean formulae, one in conjunctive and one in disjunctive normal form, the problem MONET asks whether the two formulae are equivalent. Without the restriction that the two monotone formulae are given in different normal forms, the problem becomes coNP-complete for arbitrary monotone formulae, or trivial if both are in the same normal form. Furthermore, the problem is coNP-complete for arbitrary Boolean formulae in normal form. The exact complexity of MONET is unknown: there is neither a polynomial upper bound nor a non-trivial lower bound. There are many algorithms for the decision and the computation problem; the best upper bound is quasi-polynomial in the combined length of input and output.
    In particular, we want to compare the most important algorithms from an algorithm engineering perspective. From this comparison, we hope to gain more information on hard instances for these algorithms and to improve them. There are also many open questions about new methods, e.g. randomization, multiplication, and kernelization.
    The range of applications in theory and practice is wide. Thus, we hope for many real-world instances, e.g. from data mining or e-commerce.
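
    To make the problem concrete, here is a brute-force equivalence check, exponential in the number of variables and purely illustrative (all identifiers are ours; the algorithms studied in this project are far more sophisticated).

      #include <cstdint>
      #include <iostream>
      #include <vector>

      // MONET asks whether a monotone CNF and a monotone DNF over the same
      // variables are equivalent. Clauses and terms are bitmasks of variables
      // (monotone: no negated literals).
      bool cnf(const std::vector<std::uint32_t>& clauses, std::uint32_t a) {
          for (auto c : clauses)
              if ((a & c) == 0) return false;        // clause unsatisfied
          return true;
      }
      bool dnf(const std::vector<std::uint32_t>& terms, std::uint32_t a) {
          for (auto t : terms)
              if ((a & t) == t) return true;         // term satisfied
          return false;
      }
      bool equivalent(const std::vector<std::uint32_t>& clauses,
                      const std::vector<std::uint32_t>& terms, int n) {
          for (std::uint32_t a = 0; a < (1u << n); ++a)
              if (cnf(clauses, a) != dnf(terms, a)) return false;
          return true;
      }

      int main() {
          // (x1 or x2) and (x1 or x3)  vs.  x1 or (x2 and x3), over n = 3
          std::vector<std::uint32_t> clauses = {0b011, 0b101};
          std::vector<std::uint32_t> terms   = {0b001, 0b110};
          std::cout << (equivalent(clauses, terms, 3) ? "equivalent" : "not")
                    << '\n';
      }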

    In summary, our goals are

    • Implementation and comparison of several known algorithms from an algorithm engineering perspective
    • "Upgrades" for these algorithms, e.g. exploiting easy cases and better data structures
    • Creation of a test library with miscellaneous instances
    • New methods for solving the problem, e.g. randomization, multiplication, and kernelization
     
  • Algorithm Engineering for Network Problems

    Prof. Dr. Susanne Albers; Albert-Ludwigs-Universität Freiburg

     

    Project website

     
  • Algorithm Engineering for Problems in Computer Graphics

    Prof. Dr. math. Friedhelm Meyer auf der Heide, Dr. rer. nat. Matthias Fischer; Universität Paderborn


    Project website

     
  • Algorithm Engineering for Randomized Rounding Procedures

    Prof. Dr. Benjamin Doerr; Max-Planck-Institut für Informatik, Saarbrücken

     

    Project website

     

    Randomized rounding is one of the core methods of randomized computation and is at the heart of many algorithms with good theoretical properties. Despite this, and despite the method's long theoretical history, its performance has scarcely ever been evaluated empirically, and implementations exist only in isolated special cases.
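
    In its basic form (a minimal sketch with our own naming, without the hard constraints discussed below), randomized rounding turns a fractional LP solution x in [0,1]^n into an integral vector whose expectation matches x coordinate-wise:

      #include <iostream>
      #include <random>
      #include <vector>

      // Plain randomized rounding: given a fractional LP solution x, set each
      // variable to 1 independently with probability x_i. The expected value
      // of the rounded vector equals x in every coordinate.
      std::vector<int> randomized_rounding(const std::vector<double>& x,
                                           std::mt19937& rng) {
          std::uniform_real_distribution<double> u(0.0, 1.0);
          std::vector<int> y(x.size());
          for (std::size_t i = 0; i < x.size(); ++i)
              y[i] = (u(rng) < x[i]) ? 1 : 0;
          return y;
      }

      int main() {
          std::mt19937 rng(42);
          std::vector<double> x = {0.3, 0.7, 0.5, 1.0};
          for (int v : randomized_rounding(x, rng)) std::cout << v << ' ';
          std::cout << '\n';
      }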

    Moreover, recent theoretical developments have extended the method's range of applicability to further classes of problems, such as multi-commodity flow, by providing rounding methods that maintain certain conditions (so-called hard constraints) with a guarantee. Different approaches have been suggested for this, in some situations without any obvious a priori indication of one being preferable to the others, making the need for empirical evaluations and comparisons all the stronger.

    One aim of the project is thus to provide these much-needed empirical evaluations, comparing the methods both to one another and to other suggested methods (not of the randomized rounding type); another is, based on the outcome of this, to develop the design and implementation of rounding algorithms to practical maturity.

    One particular application where randomized rounding tools have recently seen some use is the creation of good sets of sampling points (low-discrepancy point sets) in the high-dimensional unit cube, which are needed e.g. in numerical integration. This topic is also part of our work.

     
  • Algorithm Engineering for Real-time Scheduling and Routing

    Prof. Dr. Friedrich Eisenbrand; École Polytechnique Fédérale de Lausanne
    Prof. Dr. Martin Skutella; Technische Universität Berlin

     

    Project website

     

    While 20 years ago microprocessors were mostly used in desktop workstations and large-scale computers, they have meanwhile become ubiquitous. They are used in nearly all areas of technology, and specifically in safety- and mission-critical applications. The future vision of the automotive and avionics industries is complete, safe, and reliable drive-by-wire and fly-by-wire, respectively. Such industrial applications gave rise to the field of computer science called real-time scheduling and routing. Whereas classical scheduling deals with the distribution of a finite set of tasks to a set of machines, real-time scheduling deals with recurring tasks, which are often periodic and thus appear infinitely often. Since the processors in a real-time system communicate by passing messages, the messages additionally have to be routed through the architecture (modeled as a directed graph).
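
    To illustrate the flavor of periodic real-time scheduling (a classical textbook test due to Liu and Layland, shown here as our own sketch rather than this project's method): the utilization bound gives a sufficient condition for rate-monotonic schedulability of periodic tasks.

      #include <cmath>
      #include <cstdio>
      #include <vector>

      // Liu & Layland's sufficient schedulability test for rate-monotonic
      // scheduling of n periodic tasks, each with worst-case execution time C
      // and period T: total utilization <= n * (2^{1/n} - 1).
      struct Task { double C, T; };

      bool rm_schedulable(const std::vector<Task>& tasks) {
          double u = 0.0;
          for (const Task& t : tasks) u += t.C / t.T;   // total utilization
          double n = static_cast<double>(tasks.size());
          return u <= n * (std::pow(2.0, 1.0 / n) - 1.0);
      }

      int main() {
          std::vector<Task> tasks = {{1, 4}, {2, 8}, {1, 10}};
          std::printf("%s\n", rm_schedulable(tasks) ? "schedulable" : "unknown");
      }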

    The goal of the project is to develop, implement, analyze and test algorithms for real-time scheduling and routing problems using methods of linear and integer programming as well as various scheduling and routing techniques from discrete optimization.

     
  • Algorithm Engineering for the Basic Toolbox

    Prof. Dr. Peter Sanders; Karlsruhe Institute of Technology

     

    Project website

     

    This project addresses algorithm engineering for basic algorithms and data structures that are the most important building blocks of many computer applications: sorting, searching, graph traversal, and so on. Although this topic is as old as computer science itself, many interesting new results have appeared in recent years, and many gaps between theory and practice remain. In particular, many interesting approaches have not been tried thoroughly in experiments. Ever more complex hardware with memory hierarchies and several types of parallel processing requires refined models, new algorithms, and efficient implementations. We plan to incorporate the most successful implementations into reusable software libraries such as the C++ STL.

     
  • Algorithms for Complex Scheduling Problems

    Prof. Dr. Rolf H. Möhring; Technische Universität Berlin

     

    Project website

     

    Real-world scheduling problems are usually much more complex than most of the models considered in algorithm theory so far. Typically, optimal solutions cannot be found in reasonable computing time; in practice, however, good solutions have to be computed fast. To meet the runtime requirements, industry has mostly established (simple) heuristics that do not take into account results and techniques known for related theoretical problems. We aim to fill this gap between theory and practice, considering the following fields of scheduling:

    • Combined Sequencing and Scheduling
    • Turnaround Scheduling

    This necessitates investigations into the structure and complexity of these problems. In a second step, the insights need to be transferred into efficient algorithms that produce provably good solutions within appropriate computing time, even for large real-world problems. Realistic data is available from cooperations with T.A. Cook Consultants, Berlin, PSI Business Technology, Salzgitter Flachstahl GmbH, and Sachsenmilch AG, respectively (10,000-100,000 activities for turnaround scheduling, and two examples from sequencing and scheduling: one from coil coating with 20-40 coils, and another from the dairy industry with 30-40 jobs).

     
  • Algorithms for Modern Hardware: Flash Memory

    Prof. Dr. Ulrich Meyer; Universität Frankfurt

     

    Project website

     

    Modern computers deviate significantly from the uniform cost model traditionally used in algorithms research. In the first phase of the proposed project we want to address a very recent trend: flash memory. Originally used in small portable devices, this block-based solid-state storage technology is predicted to become a new standard level in the PC memory hierarchy, partially even replacing hard disks. Unfortunately, the read/write/erase performance of flash memory depends heavily on access patterns, the fill level, and optimizations that prevent the devices from wearing out early. Therefore, even cache-efficient implementations of most classic algorithms may fail to exploit the benefits of flash.

    After appropriately modelling flash memory, we aim at the design, analysis, implementation, and experimental evaluation of graph algorithms tailored to these models. We will cover both fundamental questions (like sorting, spanning trees, graph traversal, and lower bounds) and applied problems (e.g. finding cycles in networks arising from cryptographic applications) on massive graphs using flash memory. Some of these problems are still open even in the easier external-memory model, in particular for directed graphs and in dynamic or approximate settings. The best solutions will be added to libraries like STXXL.

    The potential benefit of this project is significantly improved methods for processing large-scale directed graphs such as the web graph or social network graphs. Basic graph algorithms for depth first search, breadth first search, shortest paths, spanning trees, or network flows are well-established components of every algorithms course and of many applications. Theory has produced advanced algorithms for these problems that should in principle be superior to the simple textbook methods. The starting point of this project is to explore how such advanced approaches can be transformed into practicable algorithms and implementations. We are particularly interested in algorithms that can exploit parallelism and memory hierarchies.
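
    To illustrate the access-pattern issue (a toy example of our own, independent of the STXXL internals; file name and vertex range are placeholders): block devices reward large sequential reads, so even a simple degree count over a huge edge list should scan the data block-wise rather than query it randomly.

      #include <cstdio>
      #include <vector>

      // Toy sketch: count vertex degrees with a single sequential, block-wise
      // scan over a binary edge file (pairs of 32-bit ints). Sequential block
      // access is the pattern flash and disks reward.
      int main() {
          const std::size_t BLOCK = 1 << 16;        // edges per block read
          std::vector<unsigned> deg(1000000, 0);    // assumed vertex id range
          std::FILE* f = std::fopen("edges.bin", "rb");
          if (!f) { std::perror("edges.bin"); return 1; }
          std::vector<unsigned> buf(2 * BLOCK);
          std::size_t got;
          while ((got = std::fread(buf.data(), sizeof(unsigned),
                                   buf.size(), f)) > 0)
              for (std::size_t i = 0; i + 1 < got; i += 2) {
                  ++deg[buf[i]];                    // edge (buf[i], buf[i+1])
                  ++deg[buf[i + 1]];
              }
          std::fclose(f);
          std::printf("deg[0] = %u\n", deg[0]);
          return 0;
      }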

     
  • Clustering of Static and Temporal Graphs

    Prof. Dr. Dorothea Wagner; Karlsruhe Institute of Technology

     

    Project website

     

    As a result of the rapidly growing volume of electronically available data, methods for locating relevant information and organizing it intelligently are of increasing importance. The field of algorithmics is faced with the challenge of providing efficient and practical algorithms for data clustering. This encompasses not only the development of algorithms that work well for specific applications or data sets, but also the systematic design of algorithms for formally and rigorously defined problems, together with their analysis and evaluation with respect to adequate quality criteria. In this project, algorithms for the clustering of graphs are to be developed.

    The focal point of our interest is clusterings based on the intuition of identifying as clusters those dense subgraphs that are only loosely connected to one another. To this end we will build on a systematic classification of quality measures, which allows for an unbiased comparison of various methodologies. In particular, we will engage in the still-novel field of clustering evolving or time-dependent graphs. As far as possible, the evaluation of algorithms will be based on theoretical analyses; fundamentally, however, it will be conducted experimentally, employing suitably generated graphs on the one hand and real instances on the other. As usual in algorithm engineering, we will traverse the entire cycle of design, analysis, implementation, and experimental assessment, in particular feeding the results of experiments back into design and analysis. Our implementations will be made available in the form of software tools comprising algorithms, quality indices, measures and procedures of comparison, graph generators, and benchmarks.
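
    One of the simplest quality measures of this kind (shown as a sketch of our own; the project's classification covers many more) is coverage, the fraction of edges that run inside clusters rather than between them:

      #include <cstdio>
      #include <utility>
      #include <vector>

      // Coverage of a clustering: the fraction of edges whose two endpoints
      // lie in the same cluster. Dense, loosely interconnected clusters give
      // values close to 1.
      double coverage(const std::vector<std::pair<int,int>>& edges,
                      const std::vector<int>& cluster) {
          if (edges.empty()) return 0.0;
          int intra = 0;
          for (auto [u, v] : edges)
              if (cluster[u] == cluster[v]) ++intra;
          return static_cast<double>(intra) / edges.size();
      }

      int main() {
          // Two triangles joined by a single bridge edge.
          std::vector<std::pair<int,int>> edges =
              {{0,1},{1,2},{0,2},{3,4},{4,5},{3,5},{2,3}};
          std::vector<int> cluster = {0, 0, 0, 1, 1, 1};
          std::printf("%.3f\n", coverage(edges, cluster));  // 6/7 = 0.857
      }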

     

    Software Projects

    Further software projects can also be found here.

     
  • Design and Analysis of Application-Oriented Geometric Algorithms

    Prof. Dr. Helmut Alt; Freie Universität Berlin

    Prof. Dr. Christian Knauer; Universität Bayreuth

     

    Project website

     

    The goal of this project is to close the gap between theoretically designed and analyzed algorithms and the methods widely used in practice, especially heuristics. We consider primarily, but not exclusively, the problem of geometric shape matching. There are many algorithms and heuristic methods that work well and are widely used in practice but lack theoretical analysis. We analyze such heuristics with respect to the following questions:

    1. What does the method compute? The quality of the computed results is often unknown or not even clearly defined.
    2. What is the running time of the method? In many cases there is a trade-off between running time and the quality of the results.

    It might be necessary to constrain the inputs of the methods and to characterize the cases that occur in practice in order to make the analysis feasible. Besides these theoretical studies we also plan to produce prototype implementations of the designed algorithms to verify their practicality. The research focuses on the following problems:

    • Patterns and shapes
      • Probabilistic shape matching
      • Symmetry detection
      • Application oriented methods
    • Packing and stacking geometric objects
    • Realistic input models
      • Geometric shape matching
      • Geometric search structures
    • Principal component analysis



     
  • Disturbed Diffusion for Partitioning and Clustering Graphs

    Prof. Dr. Burkhard Monien; Universität Paderborn

     

    Project website

     

    Many natural and scientific transport phenomena can be modelled by diffusive processes. These processes perform exchanges of loads between neighboring entities in order to obtain a balanced load distribution. We have modified the first order diffusion scheme (FOS) on graphs with a suitable disturbance such that its convergence state results in a non-balanced load distribution. These loads then reflect structural properties of the graph, which makes it possible to identify densely connected regions. This idea has been used successfully to develop an algorithm for graph partitioning and for balancing workloads of processors in parallel numerical simulations. Compared to classical methods, our new algorithm shows significant advantages regarding partitioning quality and migration costs. However, its running time is not satisfactory yet.
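
    For reference, a minimal sketch of the undisturbed FOS iteration (toy code of our own; the project's contribution is precisely the disturbance that makes the limit non-balanced and cluster-revealing):

      #include <cstdio>
      #include <vector>

      // First order diffusion scheme (FOS): in every round each vertex
      // exchanges an alpha-fraction of its load difference with each
      // neighbour; on a connected graph with suitable alpha the iteration
      // converges to a balanced load distribution.
      std::vector<double> fos_step(const std::vector<std::vector<int>>& adj,
                                   const std::vector<double>& load,
                                   double alpha) {
          std::vector<double> next(load);
          for (std::size_t v = 0; v < adj.size(); ++v)
              for (int u : adj[v])
                  next[v] += alpha * (load[u] - load[v]);
          return next;
      }

      int main() {
          // Path graph 0-1-2 with all load initially on vertex 0.
          std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1}};
          std::vector<double> load = {9.0, 0.0, 0.0};
          for (int t = 0; t < 50; ++t) load = fos_step(adj, load, 0.25);
          for (double l : load) std::printf("%.3f ", l);  // approaches 3 3 3
          std::printf("\n");
      }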

    In this project we examine the new diffusion-based algorithm thoroughly on a theoretical and practical basis. We see promising approaches to solving the running time problems while keeping the quality improvements. This aims at the development of a graph partitioner that enables a significantly more efficient parallel implementation and execution of parallel adaptive numerical simulations than before. Apart from that, the disturbed diffusion concept is also to be used for the automatic identification of tightly coupled groups (clusters) of different sizes in static and dynamic graphs and networks, using local knowledge only. The resulting algorithms will be theoretically analyzed, implemented, parallelized, and finally, after a thorough experimental evaluation, made available as libraries.

     
  • Engineering of Approximation Algorithms for Matching and Covering in Hypergraphs and Large Graphs

     

    Prof. Dr. rer. nat. Anand Srivastav; Christian-Albrechts-Universität Kiel

     

    Project website

     

    Matching in Hypergraphs

     

    In the b-matching problem, we aim to select as many hyperedges from a hypergraph as possible while allowing at most b of them to overlap in each vertex. This problem and related ones have many applications in medicine, computational geometry, and combinatorial auctions. The b-matching problem is NP-hard even when restricted to simply structured instances. For large b, constant-factor approximations are achieved by randomized rounding. However, applications often demand small b. For small b, oblivious greedy algorithms are state-of-the-art on the theoretical side; these incorporate graph parameters, such as the number of vertices, into the approximation guarantee.

    A goal of our project is the development of non-oblivious algorithms that work well for small b, better than the oblivious ones. We consider combinatorial and LP-based techniques, separately as well as in combination. At the same time, we work on improving the known inapproximability theory.
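
    For concreteness, a minimal sketch of the plain greedy baseline (our own naming; the oblivious algorithms from the literature are considerably more refined):

      #include <cstdio>
      #include <vector>

      // Greedy b-matching in a hypergraph: scan the hyperedges (sets of
      // vertex ids) in a fixed order and keep an edge iff every one of its
      // vertices is still used by fewer than b chosen edges.
      std::vector<int> greedy_b_matching(
              const std::vector<std::vector<int>>& edges,
              int numVertices, int b) {
          std::vector<int> used(numVertices, 0), chosen;
          for (std::size_t i = 0; i < edges.size(); ++i) {
              bool ok = true;
              for (int v : edges[i])
                  if (used[v] >= b) { ok = false; break; }
              if (!ok) continue;
              for (int v : edges[i]) ++used[v];
              chosen.push_back(static_cast<int>(i));
          }
          return chosen;
      }

      int main() {
          std::vector<std::vector<int>> edges =
              {{0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {0, 4}};
          for (int i : greedy_b_matching(edges, 5, 1))
              std::printf("edge %d\n", i);   // with b = 1: only edge 0
      }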

     

    Matching in Large Graphs

     

    For graphs containing a very large number of edges, the random access model has to be reconsidered, since random access can result in excessive seek times. The new approach is to treat a graph as a stream of edges. Algorithms then cannot simply determine, e.g., whether two vertices are adjacent. Instead, they have to make use of edges as they go by in the stream. They may pass over the stream multiple times, but the number of passes should be strictly limited.

    In our project we design streaming algorithms for matching in bipartite and general graphs. In the first phase, we presented an approximation scheme for the bipartite case and proved a worst-case bound on the number of passes that depends only polynomially on the approximation parameter. Ongoing research is concerned with experimentally analyzing and improving this algorithm. Moreover, the extension to general graphs is being investigated.
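
    As a baseline illustration (a folklore one-pass algorithm, not the project's approximation scheme): greedy matching needs a single pass over the edge stream and produces a maximal matching, which is at least half the maximum size.

      #include <cstdio>
      #include <utility>
      #include <vector>

      // One-pass greedy matching over an edge stream: keep an edge iff both
      // endpoints are still free. The result is a maximal matching.
      std::vector<std::pair<int,int>>
      greedy_stream_matching(const std::vector<std::pair<int,int>>& stream,
                             int numVertices) {
          std::vector<bool> matched(numVertices, false);
          std::vector<std::pair<int,int>> m;
          for (auto [u, v] : stream)            // single pass over the stream
              if (!matched[u] && !matched[v]) {
                  matched[u] = matched[v] = true;
                  m.push_back({u, v});
              }
          return m;
      }

      int main() {
          std::vector<std::pair<int,int>> stream = {{0,1},{1,2},{2,3},{3,0}};
          for (auto [u, v] : greedy_stream_matching(stream, 4))
              std::printf("(%d,%d)\n", u, v);   // (0,1) and (2,3)
      }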

     
  • Exact Integer Programming

    Prof. Dr. Dr. h. c. Martin Grötschel; Zuse Institut Berlin

     

    Project website

     

    Available solvers for mixed-integer programs (MIPs) focus on finding close-to-optimal solutions for feasible instances. Users are aware that the answers are only accurate with respect to feasibility and optimality tolerances, but for many applications this is regarded as acceptable.

    This situation changes fundamentally when MIPs are used to study theoretical problems, when instances become pure feasibility questions (and are infeasible), and when wrong answers can have legal consequences. Examples of applications where inexact solvers are not adequate include combinatorial design theory, chip verification, and contracting by combinatorial auctions.

    In this project, we develop and implement an approach for the exact solution of MIPs. By extending the constraint programming/MIP solver SCIP, we want to provide a tool, freely available to the scientific community, that computes exact optimal solutions or, perhaps even more importantly, exact certificates of infeasibility.

     
  • External Goal-Directed Search in Implicit Graphs

    PD Dr. Stefan Edelkamp; Universität Dortmund

     
  • Generic Decomposition Algorithms for Integer Programs

    Prof. Dr. Marco Lübbecke; RWTH Aachen University

     

    Project website

     

    There is no alternative to integer programming when it comes to computing solutions of proven quality, or even optimal solutions, for large-scale hard combinatorial optimization problems. In practical applications, matrices often have special structures that can be exploited in decomposition algorithms, in particular in branch-and-price. This has opened the way to the solution of mixed-integer programs (MIPs) of enormous size and complexity, from industry as well as from within mathematics, computer science, and operations research.

    Yet, as the state of the art stands, branch-and-price is implemented ad hoc for every new problem. Various frameworks are very helpful in this respect; still, using them requires solid expert knowledge. This project aims at making a significant contribution towards a generic implementation of decomposition algorithms. As a long-term vision, a MIP solver should be able to apply a decomposition algorithm without any user interaction. A precondition for such automation is answering important algorithmic and theoretical questions, among them:

    • recognition of decomposable structures in matrices and/or MIPs
    • development of a theory (and appropriate algorithms) for evaluating the quality of a decomposition

    In this project we address these questions. From a mathematical point of view, there are interesting relations to polyhedral combinatorics, graph theory, and linear algebra. A generic implementation of our findings is planned to be provided to the research community. To this end we closely cooperate with developers of MIP solvers (such as SCIP) and modeling languages (such as GAMS).

     
  • Kunst! - Exact Algorithms for Art Gallery Variants

    Dr. Alexander Kröller; Braunschweig Institute of Technology

     

    Project website

     

    The classical Art Gallery Problem asks for the minimum number of guards that achieve visibility coverage of a given polygon. This problem is known to be NP-hard, even for very restricted and discrete special cases. For the general problem (in which both the set of possible guard positions and the point set to be guarded are uncountable), neither constant-factor approximation algorithms nor exact solution methods are known. We develop a primal-dual approach for general art gallery problems in arbitrary polygons with holes, in which guards can be placed anywhere such that the entire interior of the polygon is guarded. Our method computes a sequence of lower and upper bounds on the optimal number of guards until, in case of convergence and integrality, an optimal solution is eventually reached. Our algorithm is based on a formulation of the problem as a (covering) linear program. It solves the problem using a cutting plane and column generation approach, i.e., by solving the primal and dual separation problems. Computational results show the usefulness of our method.
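
    Schematically (in our notation, for a finite set G of candidate guard positions and a finite set P of witness points; the true sets are uncountable and are grown by the separation procedures), the covering LP reads:

      \begin{align*}
        \min\ & \sum_{g \in G} x_g \\
        \text{s.t.}\ & \sum_{g \in G :\, p \in \mathrm{Vis}(g)} x_g \ \ge\ 1 && \text{for all } p \in P, \\
        & x_g \ \ge\ 0 && \text{for all } g \in G,
      \end{align*}

    where Vis(g) denotes the part of the polygon visible from g. Column generation adds new guard candidates (primal separation), while cutting planes add new witness points (dual separation).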

     
  • Planarization Approaches in Automatic Graph Drawing

    Prof. Dr. Michael Jünger; Universität zu Köln
    Prof. Dr. Petra Mutzel; Universität Dortmund

     

    Project website

     

    This automatic graph drawing project concerns the transfer of planarization approaches, so far used primarily in academia, to practical applications. We consider a selection of important use cases and try to address the problems arising in these applications. We integrate various application-specific constraints into the planarization method, and we provide open source software (see the OGDF library) as well as benchmark instances.


    Currently, the application domains include visualization in software engineering and biochemistry, e.g. the visualization of metabolic and protein interaction networks. We will continuously apply the algorithm engineering cycle to our results, so that the engineering process is significantly influenced by the feedback from our cooperation partners.

     
  • Practical Approximate Pattern Matching with Index Structures

    Prof. Dr. Ernst W. Mayr, Technische Universität München

     

    Project website

     

    In this research project we apply the whole spectrum of algorithm engineering to index structures for approximate pattern matching. In contrast to the case of exact search, there is a big gap between theory and practice in the approximate case, and we want to close this gap with our project. Some of the approaches that yield theoretical advances seem not to be applicable in practice due to large constant factors.

    A library of pattern matching algorithms, providing efficient methods for index-based approximate pattern matching, would be of great benefit to other researchers and computer scientists. The library is intended to be an interface for ongoing progress in the field of indexes and algorithms. The implementations form a working basis for the further algorithm engineering process, but can equally be used, e.g. by biologists, to solve real-world problems such as analyzing biological data. The library is supposed to offer index structures and search algorithms developed and optimized especially with the following aspects in mind (a small illustrative index sketch follows the list):

    • Efficient applicability to big, real-world problem instances, while using general and flexible error models.
    • Index structures and search algorithms in secondary memory.
    • Distributed index structures and search algorithms.
    • Cache-efficient search algorithms.
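
    To give a flavor of such index structures, the sketch below performs exact search with a plain suffix array (illustrative code of our own; approximate search, the project's actual focus, requires considerably more machinery):

      #include <algorithm>
      #include <cstdio>
      #include <string>
      #include <vector>

      // Exact pattern search with a naively built suffix array: sort all
      // suffix start positions, then binary-search the range of suffixes
      // that begin with the pattern.
      std::vector<int> build_suffix_array(const std::string& s) {
          std::vector<int> sa(s.size());
          for (std::size_t i = 0; i < s.size(); ++i) sa[i] = (int)i;
          std::sort(sa.begin(), sa.end(), [&](int a, int b) {
              return s.compare(a, std::string::npos,
                               s, b, std::string::npos) < 0;
          });
          return sa;
      }

      int count_occurrences(const std::string& s, const std::vector<int>& sa,
                            const std::string& pat) {
          auto lo = std::lower_bound(sa.begin(), sa.end(), pat,
              [&](int a, const std::string& p) {
                  return s.compare(a, p.size(), p) < 0;
              });
          auto hi = std::upper_bound(sa.begin(), sa.end(), pat,
              [&](const std::string& p, int a) {
                  return s.compare(a, p.size(), p) > 0;
              });
          return static_cast<int>(hi - lo);
      }

      int main() {
          std::string s = "mississippi";
          std::vector<int> sa = build_suffix_array(s);
          std::printf("%d\n", count_occurrences(s, sa, "ssi"));  // 2
      }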
     
  • Practical theory for clustering algorithms

    Prof. Dr. rer. nat. Johannes Blömer, Universität Paderborn

    Prof. Dr. rer. nat. Christian Sohler, TU Dortmund

     

    Project website

     

    Figure: The asymmetry of the KLD

    Clustering is the partitioning of a set of objects into subsets (clusters) such that objects inside one cluster are similar to each other, while objects from different clusters are not. In order to compute such a partition, one has to define what is meant by similarity. For example, if we take a map and choose the cities recorded on it as our set of objects, we can interpret a short distance between cities as similarity. The clustering then corresponds to a division of the map into urban areas. Clustering algorithms try to compute clusterings that are as good as possible. One application area is data reduction: for that purpose, every cluster is replaced by a suitable representative.

    In a research project funded by the DFG we concentrate on two aspects. On the one hand, we analyse widely used algorithms in order to find out how good or bad they are in theory. On the other hand, we try to design provably good algorithms. Particular similarity measures, such as asymmetric measures, are of special interest throughout our examination. One example of an asymmetric measure is the Kullback-Leibler divergence (KLD), which enables us to talk about the similarity of probability distributions. The figure illustrates the asymmetry of the KLD: the two colored curves have the same distance to the marked center, in one case measured from the curve to the center and in the other case the other way around.
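
    For a concrete feel of this asymmetry (a small numeric illustration of our own): with D(P||Q) = sum_i p_i log(p_i / q_i), swapping the arguments generally changes the value.

      #include <cmath>
      #include <cstdio>
      #include <vector>

      // Kullback-Leibler divergence D(P||Q) = sum_i p_i * log(p_i / q_i),
      // in nats; all entries are assumed strictly positive.
      double kld(const std::vector<double>& p, const std::vector<double>& q) {
          double d = 0.0;
          for (std::size_t i = 0; i < p.size(); ++i)
              d += p[i] * std::log(p[i] / q[i]);
          return d;
      }

      int main() {
          std::vector<double> p = {0.9, 0.1}, q = {0.5, 0.5};
          std::printf("D(P||Q) = %.4f\n", kld(p, q));  // ~0.3681
          std::printf("D(Q||P) = %.4f\n", kld(q, p));  // ~0.5108
      }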

     
  • Representation Solving

    Prof. Dr. Alexander May; Ruhr-Universität Bochum

     

    Project website

     

    For many NP-hard problems, like the subset sum problem, syndrome decoding, or the computation of a shortest vector in a lattice, we essentially only know algorithms that perform not much better than a simple brute force search over the solution space. This matches our intuition that NP-hard problems should in general require exponential time.
    There are few general techniques that help to lower the exponent of the running time. One of them is the so-called sort-and-match technique, in which the search space is split into two parts of equal size. This technique can be used to obtain a time-memory tradeoff: instead of time O(2^n) and constant space, we obtain O(2^{n/2}) both for time and space.
    In 2010, Howgrave-Graham and Joux presented another technique, which we call the representation technique, in the context of solving the subset sum problem. The idea is to represent the solution of a subset sum instance in an ambiguous way, which results in an exponential number of representations. Out of this exponential number, one then again looks for a single special solution. Surprisingly, this blowup-and-cut strategy allowed Howgrave-Graham and Joux to lower the subset sum complexity from O(2^{0.5n}) to O(2^{0.337n}).
    Our goal is to give a general analysis of the representation technique that allows it to be applied as a generic algorithmic tool in various contexts. Our first application is an improvement of the best known algorithm for decoding general linear codes, for which we can significantly lower the complexity. Further applications of the representation technique include, among others, the computation of shortest vectors and a new combinatorial algorithm for a hard problem in cyclic lattices that underlies the NTRU encryption scheme.
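
    For reference, the sort-and-match (meet-in-the-middle) idea described above, in a minimal form for subset sum (illustrative code with our own naming):

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      // Enumerate all 2^{n/2} subset sums of each half, sort one side, and
      // search it for t minus each sum of the other side: time and space
      // O(2^{n/2}) instead of time O(2^n).
      std::vector<long long> all_subset_sums(const std::vector<long long>& xs) {
          std::vector<long long> sums = {0};
          for (long long x : xs) {
              std::size_t k = sums.size();
              for (std::size_t i = 0; i < k; ++i) sums.push_back(sums[i] + x);
          }
          return sums;
      }

      bool subset_sum(const std::vector<long long>& a, long long t) {
          std::vector<long long> left(a.begin(), a.begin() + a.size() / 2);
          std::vector<long long> right(a.begin() + a.size() / 2, a.end());
          std::vector<long long> ls = all_subset_sums(left);
          std::vector<long long> rs = all_subset_sums(right);
          std::sort(rs.begin(), rs.end());
          for (long long s : ls)                  // match: look for t - s
              if (std::binary_search(rs.begin(), rs.end(), t - s)) return true;
          return false;
      }

      int main() {
          std::vector<long long> a = {3, 34, 4, 12, 5, 2};
          std::printf("%s\n", subset_sum(a, 9) ? "yes" : "no");  // 4 + 5 = 9
      }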

     
  • Robust Optimization Algorithms

    Prof. Dr. Anita Schöbel; Georg-August-Universität Göttingen

     

    Project website

     

    An exactly optimal solution to an optimization problem is often useless in practice if the assumptions change or unforeseen events occur. This is one reason for the gap between theory and practice in optimization. In the project Robust Optimization Algorithms we aim to improve this situation by developing robust algorithms that are applicable in practice. To this end, it is necessary to adapt both the robustness concepts and the algorithms to the problem instance and to the type of uncertainty found in the data.

     
  • Simple and Fast Implementation of Exact Optimization Algorithms with SCIL

     

    Prof. Dr. Ernst Althaus; Universität Mainz
    Prof. Dr. Christoph Buchheim; Technische Universität Dortmund

     

    Project website

     

    SCIL is a software package that allows simple implementation of exact algorithms for hard optimization problems. It is based on efficient branch-and-cut procedures and, in contrast to other approaches, also supports symbolic constraints in addition to linear inequalities.

    The project's goal is the further development of SCIL. We focus on the integration of new separation methods that will lead to a considerable speedup, and on the development of new modelling possibilities. An important point is the ability to use polynomial objective functions and logic constraints. Moreover, SCIL is to be extended such that numerically exact optimal solutions can be computed, an important feature, e.g., for the automatic verification of hybrid systems.

    In order to ensure a practice-oriented development, another part of the project focuses on solving difficult practical optimization problems that are particularly well suited as applications for SCIL.

     
  • Structure Based Algorithm Engineering for SAT Solving

    Prof. Dr. Michael Kaufmann, Prof. Dr. sc. techn. Wolfgang Küchlin; Universität Tübingen

     

    Project website

     

    Given a Boolean formula, the satisfiability problem (SAT) asks whether there exists a truth assignment to the variables of the formula under which the formula evaluates to true. Even though this problem may seem purely theoretical, several real-world problems can be formulated as SAT problems, for example hardware and software verification, planning, automotive product configuration, and problems in bioinformatics.
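
    For illustration, a toy version of the classical DPLL procedure that underlies modern SAT solvers (a textbook baseline of our own, without the clause learning and heuristics that make state-of-the-art solvers fast):

      #include <cstdio>
      #include <cstdlib>
      #include <vector>

      // Literals are +v / -v, a clause is a vector of literals; val[v] is
      // 0 (unassigned), 1 (true), or -1 (false). Branch on variables 1..n.
      using Clause = std::vector<int>;

      bool dpll(const std::vector<Clause>& f, std::vector<int>& val,
                int v, int n) {
          bool allSat = true;
          for (const Clause& c : f) {        // scan clauses under partial assignment
              bool sat = false, undecided = false;
              for (int lit : c) {
                  int var = std::abs(lit);
                  if (val[var] == 0) undecided = true;
                  else if (val[var] == (lit > 0 ? 1 : -1)) { sat = true; break; }
              }
              if (!sat && !undecided) return false;   // clause falsified
              if (!sat) allSat = false;
          }
          if (allSat || v > n) return allSat;
          for (int x : {1, -1}) {                     // branch on variable v
              val[v] = x;
              if (dpll(f, val, v + 1, n)) return true;
          }
          val[v] = 0;                                 // undo assignment
          return false;
      }

      int main() {
          // (x1 or x2) and (not x1 or x2) and (not x2 or x3)
          std::vector<Clause> f = {{1, 2}, {-1, 2}, {-2, 3}};
          std::vector<int> val(4, 0);
          std::printf("%s\n", dpll(f, val, 1, 3) ? "SAT" : "UNSAT");
      }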

    From the theoretical point of view, SAT is NP-complete and should thus not be solvable in feasible time. However, over the last 15 years, state-of-the-art SAT solvers have become able to tackle many real-world SAT instances with hundreds of thousands of variables.

    We aim to improve SAT solving by analysing and enhancing current solving techniques. Our main goal is to allow for more structural analysis of instances during the solving process. Moreover, we implement and evaluate hybrid and parallel solving techniques to combine different successful approaches into one robust solver.

    This work requires extensive evaluation and verification of the solver on huge benchmark sets. In the 2009 international SAT competition, our solver won the silver medal in the category of satisfiable crafted instances.

     