Continuing the tradition started at previous MIP workshops, there was a poster session at MIP2008.
Two-period Relaxations on Big-bucket Production Planning Problems (Presentation)
We investigate multi-level, multi-item production planning problems with big-bucket capacities, i.e., with multiple items competing for the same resources; such problems frequently occur in practice yet remain notoriously difficult to solve. We propose a methodology to generate all valid inequalities from the convex hull closure of the two-period subproblems, which may be the simplest model that captures the essence of the difficulty of these problems. We will present computational results and discuss possible extensions.
Joint work with Andrew Miller
A Hybrid Approach to Beam Angle Optimization in Intensity-Modulated Radiation Therapy (Presentation)
Intensity-Modulated Radiation Therapy is a technique for delivering radiation to cancer patients using intensity-modulated beams, with the aim of reducing the intensity of the beams that pass through critical structures while reaching the dose prescription in the target volume. Two decisions are of fundamental importance: selecting the beam angles and computing the intensity of the beams used to deliver the radiation to the patient. Usually, these two decisions are made separately: first, the treatment planners decide the orientation of the beams on the basis of experience and intuition, and then the intensities of the beams are optimized by an automated software tool. Automating beam angle selection is an important problem; to date, the selection is generally based on human experience. In this context, we address the problem of optimizing both decisions jointly, developing an algorithm which automatically selects the beam angles and computes the beam intensities. This problem, called Beam Angle Optimization, is highly non-convex and non-linear, with many local minima. Hence, we propose a hybrid heuristic method which combines a Simulated Annealing procedure with knowledge of the gradient of the objective function. The beam intensities are optimized by solving a Linear Programming model, with the objective of minimizing the overall radiation delivered to the patient. Experiments on simulated data show the advantages of our approach.
Four Easy Pieces
We would like to discuss four ideas we are currently exploring.

Complex Primal Heuristics for Complex Problems
DRONe, a two-stage primal heuristic we are playing with, starts by:
1. Identifying parts of the model implementing functions such as "maximum over a set of binary variables"
2. Removing the related "auxiliary" variables and constraints
3. Aggregating the binary variables in question within the remaining "core": this amounts to a variant of set covering (!)
4. Solving the aggregated core
It then proceeds to the second stage:
5. Solving the complete problem with the maxima (or similar) forced to the values obtained by solving the aggregated core
This was originally motivated by timetabling applications: fix the assignment to days first, do periods second. Notice that the first stage might provide a valid lower bound.

Primal Heuristics Exploiting the Conflict Graph
Another primal heuristic could work as follows:
1. Identify the conflict graph given by the set-partitioning subset of a MIP instance, traditionally used to generate clique cuts
2. Produce a number of large independent sets, and order them by the expected objective function values of the corresponding assignments of values to binary variables
3. Check feasibility of the corresponding reduced instances, starting from the most promising dives
Again, we would be grateful for any pointers to related work.

The Effects of Strong Branching for Free?
The set-partitioning subset of a MIP instance defines a conflict graph, traditionally used to generate clique cuts, and perhaps usable in primal heuristics. Could we, however, get any hints from this on what to branch on?
There seem to be a number of candidate branching rules, starting from the most greedy:
-- Least constraining colour to most constrained node
-- "Where local search for k-colouring gets stuck"
-- "Where iterated local search gets stuck most often"
-- "Shortest path sampled in a Zykovian merge-separate tree"
-- "Shortest path in a Zykovian merge-separate tree"
Outside of timetabling, we have no computational experience with this so far. Has this been done before?

What can be Better than CPLEX Auto-Tuning?
One can easily compile a base of rules such as: if the instance has at least n times as many columns as it has rows, use heuristic H. It seems, however, that this could become much stronger if we could do this on-line, equipped with information gathered in the present run, and in a multivariate fashion: if the instance looks like this, and the performance of H at the root node was that, multiply limit L by f. More details will be available at http://cs.nott.ac.uk/jxm/ shortly.
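The conflict-graph heuristic above can be sketched as follows. This is our own simplification with made-up instance data: the conflict graph is read off the set-partitioning rows, and a single greedy independent set is built in order of objective value (a real implementation would produce several sets and dive on each).

```python
from itertools import combinations

def conflict_graph(partition_rows):
    """Two binaries conflict if they appear in the same set-partitioning row."""
    edges = set()
    for row in partition_rows:
        for u, v in combinations(sorted(row), 2):
            edges.add((u, v))
    return edges

def greedy_independent_set(n_vars, edges, obj):
    """Greedily pick variables in order of objective value, skipping conflicts."""
    adj = {j: set() for j in range(n_vars)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen = set()
    for j in sorted(range(n_vars), key=lambda j: obj[j]):  # minimisation
        if not (adj[j] & chosen):
            chosen.add(j)
    return chosen

rows = [{0, 1}, {2, 3}]           # set-partitioning rows (made up)
obj = [1.0, 2.0, 3.0, 4.0]
chosen = greedy_independent_set(4, conflict_graph(rows), obj)  # {0, 2}
```

Fixing the chosen variables to 1 and checking feasibility of the reduced instance would then correspond to one "dive" in step 3.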
Dynamic programming based inequalities for the capacitated lot-sizing problem (Presentation)
Iterative solutions of dynamic programming formulations for the capacitated lot-sizing problem are used to generate valid inequalities for an equivalent mixed integer programming formulation. The cuts are shown to be quite effective in solving instances when compared to solving the MIP formulation directly with CPLEX and against other cutting plane approaches. The method may also show promise in more general applications.
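For context only (this is the classical recursion for the uncapacitated case, not necessarily the formulation the authors use), the value functions underlying DP-based lot-sizing formulations can be computed as follows, with made-up data:

```python
def lot_sizing_dp(demand, setup, hold):
    """O(T^2) DP: F[t] = minimum cost of meeting demands of periods 1..t,
    where each production run covers a consecutive block of periods."""
    T = len(demand)
    F = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for s in range(1, t + 1):  # last production period is s, covering s..t
            # units for period j are held from period s through j-1
            holding = sum(demand[j - 1] * sum(hold[s - 1 : j - 1])
                          for j in range(s, t + 1))
            F[t] = min(F[t], F[s - 1] + setup[s - 1] + holding)
    return F[T]

# made-up instance: producing everything in period 1 (cost 10 + 3*1) beats
# two setups (cost 20)
cost = lot_sizing_dp([2, 3], [10, 10], [1, 1])  # -> 13.0
```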
A Cutting Plane Approach to Cardinality and 0-norm Optimization
Cardinality constraints appear in a large number of applications, as diverse as finance, bioinformatics, and data mining. While such applications lie at the core of our economy and welfare, optimization problems with cardinality constraints are among the hardest in the field of operations research. We present a polyhedral study of linear optimization with cardinality constraints. In addition, we present a branch-and-cut and a Lagrangian relaxation algorithm to tackle large-scale optimization problems with cardinality constraints. Finally, we apply the algorithm to the problem of maximizing sparsity of large-margin classifiers used in supervised machine learning.
Non-Linear Programming based heuristic for Mixed-Integer Linear Programming
From both a theoretical and a practical viewpoint, finding feasible solutions for MILP problems is, in general, difficult. In recent years, some effective heuristics based on exploiting information obtained by solving relaxations of the original MILP have been presented (e.g., [1]).
In this work we start by observing that the integrality requirement, the difficult part of an MILP problem, can be interpreted as a non-convex part of the model. Finding a feasible solution to a MILP problem is then formulated as a Non-Convex Non-Linear Programming (NLP) problem: we want to satisfy the linear constraints while minimizing a non-linear (non-convex) objective function modeling the violation of the integrality requirements. Thus, any NLP solution with objective value 0 is MILP feasible. From this viewpoint, we can integrate effective methods studied for both NLP and MILP problems. In particular, classical methods for NLP problems are applied (e.g., [2]), yielding a local solution: if such a solution has strictly positive value, it is fractional. In this case, general-purpose cutting planes for MILPs are added. NLP problems are well known to be hard to solve, but recent improvements in the solvers and the exploitation of some structure allow us to obtain interesting results.
References:
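A minimal sketch of the non-convex integrality measure described above (our own illustration, not the authors' code): the distance of each component to its nearest integer, whose minimization over the LP-feasible region yields MILP-feasible points exactly when the optimum is zero.

```python
import math

def integrality_penalty(x):
    """Non-convex measure of integrality violation: zero iff every x_i is integral."""
    return sum(min(xi - math.floor(xi), math.ceil(xi) - xi) for xi in x)

# any relaxation-feasible point with zero penalty is MILP feasible
assert integrality_penalty([2.0, -1.0, 5.0]) == 0
assert integrality_penalty([0.5, 1.25]) == 0.75
```

A local NLP solver applied to this objective over the linear constraints returns either a MILP-feasible point (value 0) or a fractional local minimum, at which point cutting planes are added, as described above.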
Approximating the Stability Region for Binary Mixed-Integer Programs (Presentation)
We consider optimization problems with some binary variables, where the objective function is linear in these variables. The stability region of a given solution of such a problem is the polyhedral set of objective coefficients for which the solution is optimal. A priori knowledge of this set provides valuable information for sensitivity analysis and re-optimization when there is objective coefficient uncertainty. An exact description of the stability region typically requires an exponential number of inequalities. We develop useful polyhedral inner and outer approximations of the stability region using only a linear number of inequalities. Furthermore, when a new objective function is not in the stability region, we produce a list of good solutions that can be used as a quick heuristic or as a warm start for future solves.
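On a toy instance the definition can be checked directly (a brute-force illustration with made-up feasible points, not the paper's polyhedral approximations): an objective vector lies in the stability region of an incumbent exactly when the incumbent remains optimal under it.

```python
def in_stability_region(c, incumbent, feasible):
    """True iff the incumbent stays optimal (minimization) under objective c."""
    val = sum(ci * xi for ci, xi in zip(c, incumbent))
    return all(val <= sum(ci * xi for ci, xi in zip(c, x)) for x in feasible)

# feasible 0/1 points of some tiny problem (made up)
feasible = [(0, 0), (1, 0), (0, 1)]
assert in_stability_region([1, 2], (0, 0), feasible)       # (0,0) still optimal
assert not in_stability_region([-1, 2], (0, 0), feasible)  # (1,0) now better
```

The exact region is the intersection of one inequality per feasible point, which is why its description is generally exponential and inner/outer approximations of linear size are valuable.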
Multi-index Transportation Polytopes and Applications to Linear and Integer Programming
Transportation polytopes are classical objects in operations research and mathematical programming. In this poster, we discuss the graphs of multi-index transportation polytopes. Bounding the diameter of polytopes is particularly interesting since the diameter is a lower bound on the number of iterations required by the simplex method using any pivot rule. We present a quadratic upper bound on the diameter of the graph of $3$-way axial transportation polytopes. Bounding the diameter of $3$-way transportation polytopes is particularly interesting since any convex polytope is the face of an axial $3$-way transportation polytope (De Loera, Onn). Moreover, any convex rational polytope is isomorphically representable as a $3$-way planar transportation polytope (De Loera, Onn).
Capacitated Multi-Commodity-Flow Cuts (Presentation)
Many general mixed integer programs in practice contain a block-structure coming from multi-commodity-flow (MCF) formulations. There might be as many network matrices as there are commodities. These network matrices all represent the same graph and they are usually coupled by some "capacity"-constraints, restricting the "flow" on arcs (or edges) of this graph.
It is known that for certain network-dimensioning problems it is possible to dramatically reduce computation times and gaps by generating cut inequalities (and similar network-based inequalities) within the branch & cut procedure. There are two main reasons for this. First, these inequalities are strong in the sense that they might define high-dimensional faces. Second, state-of-the-art MIP solvers (such as CPLEX or SCIP) are not able to detect network structure within general mixed integer programming formulations.
Our work focuses on automatically detecting such MCF structures together with the coupling of the commodities by capacity constraints in general MIP instances. We identify the underlying network and derive cutting planes based on the network structure. We currently concentrate on network cuts. Given a cut in the network, we aggregate flow conservation constraints for one of the two shores and capacity constraints corresponding to cut arcs. This yields base inequalities which can then be used (following the c-MIR approach (Marchand & Wolsey 1998)) to generate cut-set inequalities (Atamtuerk 2002), flow-cover inequalities (Padberg 1985, Van Roy & Wolsey 1986, Louveaux & Wolsey 2003), and the like, depending on the type of capacity constraints. These inequalities then correspond to a chosen network cut. Our implementation uses SCIP (scip.zib.de) and already shows promising results on pure network-design instances.
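The aggregation step can be illustrated on made-up data (a simplification of the base-inequality construction that ignores the capacity rows): summing the flow-conservation rows over one shore S of a network cut cancels the flow variables on arcs with both endpoints inside S, leaving only variables on arcs crossing the cut.

```python
def aggregate_over_shore(arcs, shore, demand):
    """Net coefficient of each arc's flow variable after summing the
    flow-conservation rows (inflow - outflow = demand) over `shore`."""
    coeff = {}
    for (u, v) in arcs:
        c = (1 if v in shore else 0) - (1 if u in shore else 0)
        if c:  # arcs inside or outside the shore cancel to zero
            coeff[(u, v)] = c
    rhs = sum(demand[n] for n in shore)
    return coeff, rhs

arcs = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
coeff, rhs = aggregate_over_shore(arcs, {"a", "b"}, {"a": 2, "b": 1, "c": -3})
# only the three arcs crossing the cut survive; ("a","b") cancels
```

Combining this aggregated row with the capacity constraints of the cut arcs yields the base inequality to which c-MIR and related procedures are then applied.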
A Local Dominance Procedure for Mixed-Integer Linear Programming
Among the hardest Mixed-Integer Linear Programming (MILP) problems, the ones that exhibit a symmetric nature are particularly important in practice, as they arise in many theoretical and practical combinatorial optimization problems. A theoretical concept that generalizes the notion of symmetry is that of dominance. This concept, although known for a long time, is typically not used in general-purpose MILP codes, due to the intrinsic difficulties arising when the classical definitions are used in a completely general context.
We study a general-purpose dominance procedure proposed in the 80's by Fischetti and Toth, that overcomes some of the main drawbacks of the classical dominance criteria. Both theoretical and practical issues concerning this procedure are considered, and important improvements are proposed. Computational results on a test-bed made of hard knapsack and network loading problems are reported, showing that the proposed method can lead to considerable speedup when embedded in a general-purpose MILP solver.
Chvatal-Gomory Cuts in Branch-and-cut-and-price Algorithms
This paper focuses on how to apply Chvatal-Gomory cuts in a branch-and-cut-and-price algorithm. A straightforward approach to developing cutting planes in a column generation context is to derive cuts from the original formulation and decompose the generated constraints. However, one can also use cutting planes derived directly from the master problem variables. It is not obvious how this affects the column generation process, as the values of the dual variables of the generated cuts must be taken into account. We describe how to extend the original formulation so that cuts on master problem variables can be expressed in the context of the original solution space. Furthermore, we discuss the impact of such cuts on the branch-and-cut-and-price algorithm, especially with regard to how the cutting planes affect the complexity of the pricing problems.
Challenges in exact linear and integer programming: exact precision linear algebra
A successful approach to solving linear programming problems exactly has been to solve the problems with increasing levels of fixed precision, checking the final basis in exact arithmetic and then doing additional pivots if necessary. This work is a computational study comparing different techniques for the core element of our exact computation: solving sparse rational systems of linear equations exactly.
Joint work with William Cook and Sanjeeb Dash.
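For a sense of what "exact" means here, the following naive sketch (not the paper's method, which targets large sparse systems) solves a linear system in exact rational arithmetic using Python's Fraction type, so the basis check involves no rounding error at all:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gauss-Jordan elimination over the rationals; returns exact Fractions."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

x = solve_exact([[2, 1], [1, 3]], [3, 5])   # exact: [4/5, 7/5]
```

The intermediate rationals can grow very large, which is precisely why the techniques compared in the study (fraction-free elimination, p-adic lifting, rational reconstruction, and the like) matter for performance.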
Parallel Approximation and Integer Programming Reformulation
We show that in a knapsack feasibility problem an integral vector $p$, which is short, and near parallel to the constraint vector gives a branching direction with small integer width. We use this result to analyze two computationally efficient reformulation techniques on low density knapsack problems. We do not assume any a priori structure on the weight vector. Both reformulations have a constraint matrix with columns reduced in the sense of Lenstra, Lenstra, and Lov\'asz. We prove an upper bound on the integer width along the last variable, which becomes $1$ when the density is sufficiently small. In the proof we extract from the transformation matrices a vector which is near parallel to the constraint vector $a$. The near parallel vector is a good branching direction in the original knapsack problem, and this transfers to the last variable in the reformulations.
Joint work with Gabor Pataki at UNC Chapel Hill.
Modeling Disjunctive Constraints with a Logarithmic Number of Binary Variables and Constraints
Many combinatorial constraints over continuous variables such as SOS1 and SOS2 constraints can be interpreted as disjunctive constraints that restrict the variables to lie in the union of m specially structured polyhedra. Known mixed integer binary formulations for these constraints have a number of binary variables and extra constraints that is linear in m. We give sufficient conditions for constructing formulations for these constraints with a number of binary variables and extra constraints that is logarithmic in m. Using these conditions we introduce the first mixed integer binary formulations for SOS1 and SOS2 constraints that use a number of binary variables and extra constraints that is logarithmic in the number of continuous variables constrained. We also introduce the first mixed integer binary formulations for piecewise linear functions of one and two variables that use a number of binary variables and extra constraints that is logarithmic in the number of linear pieces of the functions. We prove that the new formulations for piecewise linear functions have favorable tightness properties and present computational results showing that they can significantly outperform other mixed integer binary formulations.
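The combinatorial ingredient behind such logarithmic formulations can be illustrated as follows (our sketch, not the paper's exact construction): a reflected binary Gray code over ceil(log2 m) bits assigns codes to the m pieces so that consecutive pieces differ in a single binary variable, which is what allows adjacent SOS2 intervals to share binary assignments.

```python
import math

def gray_codes(m):
    """Binary codes for m pieces using ceil(log2 m) binary variables."""
    n_bits = max(1, math.ceil(math.log2(m)))
    return [i ^ (i >> 1) for i in range(m)], n_bits

codes, n_bits = gray_codes(8)   # 8 pieces encoded with only 3 binaries
# consecutive pieces differ in exactly one binary variable
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```

The formulation then forces the continuous weights of a piece to zero whenever any binary variable disagrees with that piece's code, using one pair of constraints per bit rather than one per piece.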
Can Pure Cutting Plane Algorithms Work?
We discuss an implementation of the lexicographic version of Gomory's fractional cutting plane method and of two heuristics mimicking the latter. In computational testing on a battery of MIPLIB problems we compare the performance of these variants with that of the standard Gomory algorithm, both in the single-cut and in the multi-cut (rounds of cuts) version, and show that they provide a radical improvement over the standard procedure. In particular, we report the exact solution of ILP instances from MIPLIB such as stein15, stein27, and bm23, for which the standard Gomory cutting plane algorithm is not able to close more than a tiny fraction of the integrality gap. We also offer an explanation for this surprising phenomenon.
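The basic cut underlying all the variants compared above is Gomory's fractional cut; a minimal illustration in exact arithmetic (the tableau row below is made up) is:

```python
from fractions import Fraction

def frac(q):
    """Exact fractional part of a rational: q - floor(q)."""
    return q - (q.numerator // q.denominator)

def gomory_fractional_cut(row, rhs):
    """From a tableau row  x_B + sum_j a_j x_j = b  with fractional b,
    return the valid cut  sum_j frac(a_j) x_j >= frac(b)."""
    return [frac(a) for a in row], frac(rhs)

coeffs, b0 = gomory_fractional_cut(
    [Fraction(7, 4), Fraction(-1, 2)], Fraction(11, 4))
# coeffs == [3/4, 1/2], b0 == 3/4
```

The lexicographic variant studied in the paper differs not in this formula but in how the source row and the re-optimization are chosen, which is what controls the otherwise notorious numerical degeneracy of the pure method.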
Lifted Valid Inequalities for Stochastic Dynamic Knapsack Sets
The polyhedral study of the stochastic dynamic knapsack problem (SDKP) has proven very useful for understanding the structure of stochastic lot-sizing problems and other typical stochastic integer programming sets. In this paper, we investigate sequence independent lifting and partial sequence independent lifting to generate valid inequalities for the SDKP. First, we derive conditions under which lifting is (completely) sequence independent, build strong superadditive approximating lifting functions, and describe special cases in which the lifted inequalities are facet-defining. We then consider a partial sequence independent lifting approach, i.e., sequential lifting from path to path and sequence independent lifting inside paths, and build strong approximating lifting functions for this mechanism. We finally compare the two approaches, discuss the connection with the pairing scheme, and derive cutting planes from the SDKP for stochastic lot-sizing problems.