Conference 2025

Abstracts

Keynote presentations

Daniel Kuhn

Lecture 1: Metrizing Fairness

We study supervised learning problems that have significant effects on individuals from two demographic groups, and we seek predictors that are fair with respect to a group fairness criterion such as statistical parity (SP). A predictor is SP-fair if the distributions of predictions within the two groups are close in Kolmogorov distance, and fairness is achieved by penalizing the dissimilarity of these two distributions in the objective function of the learning problem. In this paper, we identify conditions under which hard SP constraints are guaranteed to improve predictive accuracy. We also showcase conceptual and computational benefits of measuring unfairness with integral probability metrics (IPMs) other than the Kolmogorov distance. Conceptually, we show that the generator of any IPM can be interpreted as a family of utility functions and that unfairness with respect to this IPM arises if individuals in the two demographic groups have diverging expected utilities. We also prove that the unfairness-regularized prediction loss admits unbiased gradient estimators, which are constructed from random mini-batches of training samples, if unfairness is measured by the squared L2-distance or by a squared maximum mean discrepancy. In this case, the fair learning problem is amenable to efficient stochastic gradient descent (SGD) algorithms. Numerical experiments on synthetic and real data show that these SGD algorithms outperform state-of-the-art methods for fair learning in that they achieve superior accuracy-unfairness trade-offs, sometimes orders of magnitude faster.
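
As a rough illustration of the mini-batch penalty (a sketch of my own, not the authors' code: the binary cross-entropy loss, the Gaussian kernel, and the weights sigma and lam are illustrative choices, and this simple squared-MMD estimator is the biased V-statistic rather than the unbiased estimator analyzed in the lecture):

import torch
import torch.nn.functional as F

def squared_mmd(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Simple (biased) estimate of MMD^2 between 1-d samples a and b with a Gaussian kernel.
    def k(x, y):
        return torch.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def fair_loss(scores, labels, group, lam: float = 1.0):
    # Prediction loss on a mini-batch plus lam times the squared MMD between the
    # score distributions of the two demographic groups in that mini-batch.
    pred_loss = F.binary_cross_entropy_with_logits(scores, labels.float())
    penalty = squared_mmd(scores[group == 0], scores[group == 1])
    return pred_loss + lam * penalty

Minimizing fair_loss with any SGD-type optimizer then trades off accuracy against unfairness through lam.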

Lecture 2: Wasserstein Distributionally Robust Optimization with Heterogeneous Data Sources

We study decision problems under uncertainty, where the decision-maker has access to K data sources that carry biased information about the underlying risk factors. The biases are measured by the mismatch between the risk factor distribution and the K data-generating distributions with respect to an optimal transport (OT) distance. In this situation, the decision-maker can exploit the information contained in the biased samples by solving a distributionally robust optimization (DRO) problem, where the ambiguity set is defined as the intersection of K OT neighborhoods, each of which is centered at the empirical distribution on the samples generated by a biased data source. We show that if the decision-maker has a prior belief about the biases, then the out-of-sample performance of the DRO solution can improve with K, irrespective of the magnitude of the biases. We also show that, under standard convexity assumptions, the proposed DRO problem is computationally tractable if either K or the dimension of the risk factors is kept constant.
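
Schematically (in my notation, not necessarily that of the lecture), the resulting problem reads

\min_{x \in \mathcal{X}} \; \sup_{\mathbb{Q} \in \bigcap_{k=1}^{K} \mathbb{B}_{\varepsilon_k}(\widehat{\mathbb{P}}_k)} \; \mathbb{E}_{\mathbb{Q}}[\ell(x, \xi)],

where \widehat{\mathbb{P}}_k is the empirical distribution of the samples from source k, \mathbb{B}_{\varepsilon_k} is the OT ball whose radius reflects the assumed bias of that source, and \ell is the loss of decision x under risk factor \xi.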


Marco Scarsini

Lecture 1: A Characterization of Optimal Queueing Regimes

We consider an M/M/1 queueing model where customers can strategically decide to enter or leave the queue. We characterize the class of queueing regimes such that, for any parameters of the model, the socially efficient behavior is an equilibrium outcome.

Lecture 2: Learning in Games with Random Payoffs

After briefly reviewing some known results about (pure) equilibria in games with random payoffs, we consider the asymptotic behavior of some learning procedures, such as best-response dynamics, in some classes of games with random payoffs.


Huseyin Topaloglu

Lecture 1: Coordinated Inventory Stocking and Assortment Personalization

We study a joint inventory stocking and assortment personalization problem. We have access to a set of products that can be used to stock a storage facility with limited capacity. At the beginning of the selling horizon, we decide how many units of each product to stock. Customers of different types with different product preferences arrive into the system over the selling horizon. Depending on the remaining product inventories and the type of the customer, we offer a personalized product assortment to the arriving customer. The customer makes a choice within the assortment according to a choice model. Our goal is to choose the stocking quantities at the beginning of the selling horizon and to find a policy to offer a personalized assortment to each customer so that we maximize the total expected revenue over the selling horizon. Our work is motivated by online platforms making same-day delivery promises or selling fresh groceries, which require operating out of an urban warehouse to be close to customers, but allow the flexibility to personalize the assortment for each customer. Finding a good assortment personalization policy requires approximating a high-dimensional dynamic program with a state variable that keeps track of the remaining inventories. Making the stocking decisions requires solving an optimization problem that involves the value functions of the dynamic program in the objective function. We give an approximation framework for the joint inventory stocking and assortment personalization problem that yields constant-factor performance guarantees, as well as asymptotically optimal solutions as the capacity of the storage facility and demands get large.
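
As a schematic illustration (notation of my own, not taken from the talk), the personalization policy is governed by a Bellman recursion of the form

V_t(I) = \mathbb{E}_{\theta}\Big[\max_{S \subseteq A(I)} \; \sum_{j \in S} \phi_{\theta,j}(S)\big(r_j + V_{t+1}(I - e_j)\big) + \phi_{\theta,0}(S)\, V_{t+1}(I)\Big],

where I is the vector of remaining inventories, A(I) the set of in-stock products, \phi_{\theta,j}(S) the probability that a customer of type \theta chooses product j from assortment S (with j = 0 denoting no purchase), r_j the revenue of product j, and e_j the j-th unit vector. The state I is high-dimensional, which is why the talk develops approximations with provable guarantees rather than solving this recursion exactly.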

Lecture 2: Revenue Management with Calendar-Aware and Dependent Demands: Asymptotically Tight Fluid Approximations

When modeling the demand in revenue management systems, a natural approach is to focus on a canonical interval of time, such as a week, so that we forecast the demand over each week in the selling horizon. Ideally, we would like to use random variables with general distributions to model the demand over each week. The current demand can give a signal for the future demand, so we also would like to capture the dependence between the demands over different weeks. Prevalent demand models in the literature, which are based on a discrete-time approximation to a Poisson process, are not compatible with these needs. In this talk, we focus on revenue management models that are compatible with a natural approach for forecasting the demand. Building such models through dynamic programming is not difficult. We divide the selling horizon into multiple stages, each stage being a canonical interval of time on the calendar. We have a random number of customer arrivals in each stage, whose distribution is arbitrary and depends on the number of arrivals in the previous stage. The question we seek to answer is the form of the corresponding fluid approximation. We give the correct fluid approximation in the sense that it yields asymptotically optimal policies. The form of our fluid approximation is surprising as its constraints use expected capacity consumption of a resource up to a certain time period, conditional on the demand in the stage just before the time period in question.


Angelika Wiegele

Lecture 1: Integer Semidefinite Programming for the Quadratic Minimum Spanning Tree Problem

In the quadratic minimum spanning tree problem (QMSTP), one wants to find the minimizer of a quadratic function over all possible spanning trees of a graph. In this talk we show how to formulate the QMSTP as mixed-integer semidefinite programs by exploiting the algebraic connectivity of a graph. Based on these formulations, we derive a doubly nonnegative relaxation (DNN) for the QMSTP and present valid inequalities to strengthen the relaxation using the Chvátal-Gomory procedure for mixed-integer conic programming. We present a version of the Peaceman-Rachford splitting method, which allows us to solve the DNN relaxation and thereby obtain strong bounds on the QMSTP.
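
To give a flavor of such a formulation (one possible variant, not necessarily the exact model of the talk): with binary edge variables x and weighted Laplacian L(x) = \sum_{e} x_e L_e, spanning trees can be described by

\min \; x^{\top} Q x \quad \text{s.t.} \quad \sum_{e \in E} x_e = n - 1, \quad L(x) + \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top} \succeq \varepsilon I, \quad x \in \{0,1\}^{E},

for a small \varepsilon > 0, since the semidefinite constraint enforces a positive algebraic connectivity \lambda_2(L(x)) and hence connectivity of the selected edge set.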

Lecture 2: The Edge Expansion of a Graph: Solution Methods and a Completely Positive Reformulation

The edge expansion problem is a famously hard combinatorial problem. It can be formulated as an optimization problem where a fractional objective function is minimized subject to linear inequalities and binary variables. We present approaches using semidefinite programming (SDP) to tackle this fractional optimization problem. One solution method is to split the problem into bisection problems and solve these using a branch-and-bound algorithm, another one is to use Dinkelbach's algorithm for fractional programming. Furthermore, we give a formulation as a completely positive program that gives rise to a doubly nonnegative relaxation providing strong bounds.
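
For reference, the edge expansion of a graph G = (V, E) is

h(G) = \min_{\emptyset \neq S \subset V,\; |S| \le |V|/2} \frac{|\partial S|}{|S|},

where \partial S is the set of edges with exactly one endpoint in S. Dinkelbach's scheme for such ratios iterates as follows: given \lambda_k, compute S_{k+1} \in \arg\min_S \big(|\partial S| - \lambda_k |S|\big), set \lambda_{k+1} = |\partial S_{k+1}|/|S_{k+1}|, and stop when the inner minimum equals zero. Each inner problem is again a hard combinatorial problem, which is where the SDP-based bounds enter.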


LNMB / NGB Seminar

TNT Express: Supply Chain-Wide Optimization at TNT

The application of operations research (OR) at TNT Express (now part of FedEx) has greatly improved decision-making quality, resulting in substantial cost savings and reduced carbon emissions. The Global Optimization (GO) program was launched to integrate decision support into every level of TNT's operations. This initiative produced a suite of optimization tools that empowered TNT Express teams to enhance package delivery efficiency across both road and air networks. To facilitate the development and deployment of these solutions, we established Communities of Practice (CoPs), where internal and external experts gather at internal conferences to share insights and collaborate. Additionally, we launched the GO Academy, an innovative two-year training program that equips TNT Express employees with in-depth knowledge of optimization principles and their practical applications. Together, these initiatives have embedded OR as a core value within TNT Express, driving ongoing efficiency, innovation, and sustainability.

Netherlands Railways: The New Dutch Timetable: The O.R. Revolution

Netherlands Railways (NS) won the Franz Edelman Award in 2008. In this talk, we look back at this achievement but also discuss the use of Operations Research (OR) methods after 2008. NS received the Franz Edelman Award for the introduction of a completely new timetable in December 2006. Its objective was to facilitate the growth of passenger and freight transport on a highly utilized railway network and improve the robustness of the timetable, thus resulting in fewer operational train delays. Modifications to the existing timetable, which was constructed in 1970, were not an option; additional growth would require significant investments in the rail infrastructure. Constructing a railway timetable from scratch for about 5,500 daily trains was a complex problem. To support this process, we generated several timetables using sophisticated operations research techniques. Furthermore, because rolling stock and crew costs are principal components of the costs of a passenger railway operator, we used innovative OR tools to devise efficient schedules for these two resources. The new resource schedules and the increased number of passengers resulted in an additional annual profit. However, the benefits of the new timetable for the Dutch society were much greater: More trains are transporting more passengers on the same railway infrastructure, and these trains are arriving and departing on schedule more than they ever have in the past.

Dutch Delta Programme Commissioner: Economically Efficient Flood Standards to Protect the Netherlands Against Flooding

In the Netherlands, flood protection is a matter of national survival. In 2008, the Second Delta Committee recommended increasing legal flood protection standards at least tenfold to compensate for population and economic growth since 1953; this recommendation would have required dike improvement investments estimated at 11.5 billion euro. Our research group was charged with developing efficient flood protection standards in a more objective way. We used cost-benefit analysis and mixed-integer nonlinear optimization to demonstrate the efficiency of increasing the legal standards in three critical regions only. Monte Carlo analysis confirms the robustness of this recommendation. In 2012, the state secretary of the Ministry of Infrastructure and the Environment accepted our results as a basis for legislation. Compared to the earlier recommendation, this successful application of operations research yields both a highly significant increase in protection for these regions (in which two-thirds of the benefits of the proposed improvements accrue) and approximately 7.8 billion euro in cost savings. The new safety standards were accepted by the House of Parliament and have been laid down in law since 2017. This project won the Franz Edelman Award in 2013. In this talk we will discuss the reason for this project, the optimization methodology that we developed, the results we obtained, and the current impact.


PhD presentations

Monday, 13 January 2025

Room: Steyl

Agnieszka Janicka, Eindhoven University of Technology

Title: Impact of Scale-free-tailed Traffic Volumes on Highway Congestion: A Probabilistic Perspective

Abstract: Traffic congestion continues to escalate with urbanization and socio-economic development, necessitating advanced modeling to understand and mitigate its impacts. In large-scale networks, this problem can be studied through the lens of cascading failure models, where congestion not only impacts isolated segments but also propagates through the network in a domino-like fashion. A standard metric for understanding these impacts is congestion cost, quantifying the additional travel time caused by traffic jams. Recent data suggest congestion cost exhibits a universal scale-free-tailed behavior, as similar scaling parameters were discovered across different locations. However, the mechanism driving this phenomenon is not yet well understood. To address this gap, we propose a stochastic cascading failure model of traffic congestion. Our main result shows that traffic congestion cost is driven by the scale-free-tailed distribution of traffic volumes. This arises as a consequence of the Single Big Jump Principle, implying that severe, country-wide congestion is likely caused by disproportionately large traffic originating from a single large city. We also show that the scale-free nature of congestion cost is robust with respect to various congestion propagation rules, providing an explanation for the universal scaling observed in empirical data. These findings bring a completely new perspective to understanding the fundamental drivers of traffic congestion and offer a unifying framework for studying congestion phenomena across diverse traffic networks.
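
For context, the Single Big Jump Principle states that for i.i.d. heavy-tailed (e.g. regularly varying) random variables X_1, \dots, X_n,

\mathbb{P}(X_1 + \dots + X_n > x) \sim n\, \mathbb{P}(X_1 > x) \sim \mathbb{P}\big(\max_{i \le n} X_i > x\big) \quad \text{as } x \to \infty,

so an exceptionally large total is overwhelmingly likely to be caused by a single exceptionally large summand, here the traffic volume originating from a single large city.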

Christian Franssen, Vrije Universiteit Amsterdam

Title: A Swiss-army-knife regularizer for pruning of neural networks

Abstract: Pruning encompasses a range of techniques aimed at increasing the sparsity of neural networks (NNs). These techniques can generally be framed as minimizing a loss function subject to an L0-norm constraint. In this paper, we introduce CoNNect, a novel differentiable regularizer for sparse NN training, inspired by Katz centrality, which measures connectivity in weighted graphs. Unlike L1-regularization, which is often used as a surrogate for L0-norm regularization, CoNNect ensures that neural networks maintain connectivity between the input and output layers throughout training. We prove that CoNNect effectively approximates L0-regularization and guarantees maximally connected network structures as stable stationary points, avoiding issues such as layer collapse. Our theoretical and numerical results demonstrate that CoNNect outperforms L1-norm regularization. Moreover, we show that CoNNect is applicable to both unstructured and structured pruning, and further validate its scalability and effectiveness through improved one-shot pruning performance in large language models.
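
A toy illustration of the underlying idea (my own walk-counting sketch in the spirit of Katz centrality, not the CoNNect regularizer itself; the penalty weight lambda_conn is hypothetical):

import torch

def input_output_connectivity(weights: list) -> torch.Tensor:
    # weights[l] has shape (n_out_l, n_in_l); layers are applied in order.
    # The returned score sums |weight| products over all input-to-output paths,
    # so it is zero exactly when no nonzero-weight path connects inputs to outputs.
    v = torch.ones(weights[0].shape[1])
    for W in weights:
        v = W.abs() @ v
    return v.sum()

# Example use as a differentiable penalty added to the training loss:
# loss = task_loss - lambda_conn * torch.log(input_output_connectivity(weights) + 1e-12)

Because this score vanishes only when the input and output layers become disconnected, adding its negative logarithm as a penalty discourages layer collapse while remaining differentiable, which is the property CoNNect formalizes.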

Maike de Jongh, University of Twente

Title: Controlling the Ising model using spatiotemporal Markov decision theory

Abstract: Dynamics that are governed by both spatial and temporal interaction structures occur in a wide range of disciplines, such as economics, ecology, logistics and healthcare. In many situations, it is of interest to control such spatiotemporal processes and to steer them towards desirable behaviours. In this talk, we introduce the spatiotemporal Markov decision process (STMDP), an extension of the classic Markov decision process that provides a framework for sequential decision making in spatiotemporal stochastic settings. We illustrate the framework by applying it to a dynamic version of the Ising model on a two-dimensional lattice. At low temperatures and under a small positive magnetic field, this model is known to transition from the all-minus configuration to the all-plus configuration through the formation of a droplet of +-spins that will eventually nucleate the entire lattice. Our aim is to speed up this nucleation process by means of external influences and find the most efficient strategy to drive the process towards the all-plus configuration. After casting this problem as an STMDP, we provide insights into the structure and performance of the optimal policy.
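
In standard notation (not specific to the talk), the underlying model is the Gibbs measure of the Ising Hamiltonian

H(\sigma) = -J \sum_{\langle i,j \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i, \qquad \sigma_i \in \{-1,+1\}, \qquad \pi_\beta(\sigma) \propto e^{-\beta H(\sigma)},

with coupling J > 0, small external field h > 0 and inverse temperature \beta; the STMDP actions can be thought of as spatially targeted external influences (for instance, locally biasing the field or the flip rates) chosen to accelerate the nucleation of the all-plus phase.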

Room: China

Berend Markhorst, CWI

Title: A two-step warm start method using lazy cuts for solving large-scale stochastic mixed-integer problems

Abstract: Two-stage (mixed-)integer stochastic optimization problems with a large number of constraints become computationally challenging when the number of scenarios representing parameter uncertainties grows. Motivated by this, we propose the TULIP-algorithm (“Two-step warm start method Using Lazy Cuts for large-scale stochastic mixed-Integer Problems”), a two-step approach: first, we generate a reduced set of representative scenarios and solve a relaxed version of the problem. Then, we add the generated lazy cuts from this solution as constraints to the original problem with the full scenario set. We demonstrate the effectiveness of TULIP on two classical benchmark problems: the Stochastic Capacitated Vehicle Routing Problem and the Two-Stage Stochastic Steiner Forest Problem. Our results, based on extensive numerical experiments, show that TULIP yields significant computational gains compared to solving the problem directly with branch-and-cut. This makes it a promising tool for tackling large-scale two-stage (mixed-)integer stochastic optimization problems.

Changrui Liu, Delft University of Technology

Title: Towards the regret analysis of model predictive control with imperfect control inputs

Abstract: Implementing model predictive control (MPC) in practice faces many subtle but prevalent problems, including modeling errors, solver errors, and actuator faults. In essence, the real control input applied to the system always deviates from the ideal one derived in theory. In this work, we provide a general analysis pipeline to quantify the suboptimality of MPC due to imperfect control inputs in terms of dynamic regret. Moreover, we discuss the computability of the regret bound when the properties of the ideal MPC controller are unknown. Simulation of MPC for a piecewise affine system demonstrates the validity of the analysis method.
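
In a schematic form (my notation, not necessarily that of the talk): if u_t^{\ast} denotes the ideal MPC input, u_t = u_t^{\ast} + e_t the input actually applied, and x_t, x_t^{\ast} the corresponding state trajectories, the dynamic regret over a horizon T is

\mathrm{Regret}_T = \sum_{t=0}^{T-1} \ell(x_t, u_t) - \sum_{t=0}^{T-1} \ell(x_t^{\ast}, u_t^{\ast}),

and the analysis seeks bounds on \mathrm{Regret}_T in terms of the input errors \|e_t\|, ideally with constants that remain computable when the properties of the ideal controller are unknown.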

Alireza Shefaei, Delft University of Technology

Title: Nested approaches for multi-stage stochastic planning problems

Abstract: We present a solver that combines a nested primal-dual decomposition technique and convex relaxation approaches for tackling non-convex multi-stage stochastic programming problems. The approach addresses optimal long-term water supply infrastructure planning with feasibility constraints at operational timescales. We combine an outer primal decomposition over the planning stages with inner dual decompositions over the operating scenarios, relaxing the convexified non-anticipativity constraints across scenarios.

Room: Congo

Bas Verseveldt, Tilburg University

Title: Robust pricing and stocking in a world of uncertain demand

Abstract: How can we make better decisions under uncertainty? In this talk, we will explore distribution-free bounds for key quantities in probability theory, assuming only the mean and dispersion of random variables. Using a novel yet elementary technique, we reduce the set of all candidate solutions to two-point distributions. We will consider a general dispersion measure that has variance, mean absolute deviation, and power moments as special cases. We apply the results to practical scenarios like robust newsvendor stocking and monopoly pricing, generalizing classic mean-variance models. This allows us to uncover how pricing and order decisions adapt to growing demand uncertainty, even when heavy-tailed distributions are allowed.
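
A concrete instance of the classic mean-variance results being generalized is Scarf's distribution-free newsvendor rule: with unit cost c, selling price p, and demand known only through its mean \mu and standard deviation \sigma, the worst-case-optimal order quantity is

q^{\ast} = \mu + \frac{\sigma}{2}\left(\sqrt{\frac{p-c}{c}} - \sqrt{\frac{c}{p-c}}\right)

(provided the profit margin is large enough relative to the coefficient of variation; otherwise ordering nothing is optimal), and the worst-case demand distribution is a two-point distribution, the same structural reduction that the talk exploits for general dispersion measures.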

Tim van Eck, Tilburg University

Title: Robust competitive ratio for deterministic monopoly pricing

Abstract: We consider deterministic monopoly pricing when only partial market knowledge is available, specifically when the monopolist only knows summary statistics of the valuation distribution such as mean, dispersion, and maximum valuation. Employing distributionally robust optimization and max-min analysis, we use the competitive ratio (CR) to evaluate pricing performance and compare it with traditional expected revenue metrics. We provide a comprehensive solution to the minimization problem for CR by identifying the worst-case market scenario given the limited information. Additionally, we derive closed-form solutions for optimal pricing under various dispersion measures, including variance and other power moments. Our findings reveal that the worst-case market for CR aligns with that for expected revenue, which challenges the conventional belief that CR restricts the adversary’s ability to create extreme scenarios. Through novel proof techniques tailored to CR, we also show how dispersion and maximum valuation impact optimal deterministic pricing. Our results offer valuable insights for adjusting prices effectively and avoiding unrealistic worst-case scenarios, thus leading to more robust and resilient pricing strategies. In particular, setting a maximum valuation introduces a new high-pricing strategy and proves a key factor in unlocking the potential for more profitable (not overly conservative) pricing schemes.
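
Schematically (my notation), for a posted price p and the set \mathcal{F} of valuation distributions consistent with the known statistics, the competitive ratio is

\mathrm{CR}(p) = \inf_{F \in \mathcal{F}} \frac{p\,\big(1 - F(p^{-})\big)}{\sup_{q} \; q\,\big(1 - F(q^{-})\big)},

i.e. the revenue of price p relative to the best price in hindsight under the worst admissible market, and the monopolist chooses p to maximize \mathrm{CR}(p); the traditional criterion instead maximizes the worst-case expected revenue \inf_{F \in \mathcal{F}} p\,(1 - F(p^{-})).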

Mette Wagenvoort, Erasmus University Rotterdam

Title: Policies for the generalised capacitated resupply problem

Abstract: We consider the Generalised Capacitated Resupply problem (GCRP) in which locations with a given capacity and demand rate should be resupplied by vehicles such that they do not run out of stock and the number of vehicles is minimised. Compared to related problems, we consider the scenario where the payload of the vehicles may not suffice to bring the stock level back to full capacity. In addition to the GCRP, we consider three variants of the problem in which the capacities and demand rates and/or the resupply times are homogeneous. We present policies to solve the different variants of the problem and provide corresponding approximation guarantees.

Room: Ghana

Cigdem Karademir, Delft University of Technology

Title: Two-echelon multi-trip vehicle routing problems with synchronization under delays

Abstract: Meeting the increasing demand in cities has become challenging under the regulations limiting freight vehicles to improve mobility and reduce carbon footprints. By effectively coordinating with larger vehicles, new electric vehicle technologies can overcome restrictions in driving range and storage capacity. This study proposes integrated water- and land-based transportation systems using light electric freight vehicles and autonomous vessels as mobile depots to enhance the efficiency of city logistics. However, such systems introduce dependency between different vehicles at the transshipment locations and face potential delays that can impact the reliability of the entire service network. Dedicating storage at these locations might mitigate the impact of delays, but it ties reliability to location and capacity decisions made at the strategic level. On the other hand, public spaces such as parking spots or transportation stops can be used as on-demand transshipment points, enabling revisiting these decisions at the operational level. However, this flexibility requires spatio-temporal synchronization between interacting vehicles and might propagate the delays in the system. This study provides comparative analyses of the impact of delays on integrated service designs with and without storage, 2-echelon-S and 2-echelon-F, respectively. We model these systems as two-echelon multi-trip vehicle routing problems with synchronization, addressing delays through a two-stage stochastic program with simple recourse. Preliminary experiments on small networks show that 2-echelon-F is better at reducing expected lateness and offers greater flexibility in logistics re-organization to minimize the impact of delays. To further analyze the reliability of such sustainable logistics solutions in cities, different delay sources will be studied.

Dawei Chen, Eindhoven University of Technology

Title: A reinforcement learning approach for the dynamic vehicle routing and scheduling problem

Abstract: The Dynamic Vehicle Routing and Scheduling Problem with Stochastic Customer Requests and Time-dependent, Stochastic Travel Times (DVRSP-scr-tstt) represents a complex challenge where vehicle fleets must dynamically adapt to stochastic service requests as well as time-dependent and stochastic travel times. This study proposes a novel approach that uses reinforcement learning (RL) to effectively manage this complexity. The RL approach trains a deep neural network to facilitate real-time decision-making. It makes the routing and scheduling decisions for the DVRSP-scr-tstt, minimizing travel and waiting times and penalties for late arrivals and unserved customers. We compare the proposed RL approach against established benchmarks and heuristics, demonstrating its robustness in managing the intricacies of real-time vehicle routing and scheduling in various scenarios. Through a mixed-integer programming model, we mathematically describe the deterministic variant of the DVRSP-scr-tstt, setting the lower-bound benchmark in operational efficiency and customer satisfaction. This numerical study highlights the potential of the proposed RL approach in overcoming the logistical challenges posed by stochastic requests and stochastic travel conditions.

Steven Miltenburg, Vrije Universiteit Amsterdam

Title: Complexity of fixed order routing

Abstract: We consider the classic Vehicle Routing Problem with the additional property that all requests are ordered and the subtour of each server (or vehicle) must obey the fixed order. A scheduling version of this problem was introduced by Bosman et al. (2019). We study several metric spaces and objective functions and our results show that in some settings such a fixed order simplifies the problem, while in others it makes an easy problem become NP-hard. For general metrics, we show that c-capacitated VRP remains APX-hard in the fixed order setting for c = 3 and show that the well-known iterated tour partitioning algorithm yields a (2 − 1/c)-approximation. When all points are on the line, we show that the fixed order restriction makes VRP NP-hard to solve for minimizing total completion time or maximum completion time, in contrast to standard VRP. We also sketch how to obtain a PTAS in these settings for general metrics.

Tuesday, 14 January 2025

Room: Steyl

Imko Marijnissen, Delft University of Technology

Title: Solving scheduling using constraint programming

Abstract: Constraint programming has seen considerable success in recent years; this success can be (partially) attributed to the introduction of expressive global constraints, specifically the cumulative constraint. These global constraints reason about substructures of the problem and can use complex reasoning to shrink the size of the search space. Propagating such constraints efficiently is thus crucial for solving a variety of scheduling problems. This talk will be about how to efficiently propagate this type of reasoning in a constraint programming solver and how to improve its efficiency by considering incrementality. Incrementality is a well-known technique but has not been applied in this specific context. Our results indicate that this can lead to significant benefits.

Kim van den Houten, Delft University of Technology

Title: Proactive and reactive constraint programming for stochastic project scheduling

Abstract: This study investigates scheduling strategies for the stochastic resource-constrained project scheduling problem with maximal time lags (SRCPSP/max). Recent advances in Constraint Programming (CP) and Temporal Networks have renewed interest in evaluating the advantages and drawbacks of various proactive and reactive scheduling methods. First, we present a new, CP-based fully proactive method. Second, we show how a reactive approach can be constructed using an online rescheduling procedure. A third contribution is based on partial order schedules and uses Simple Temporal Networks with Uncertainty (STNUs). Our statistical analysis shows that the STNU-based algorithm performs best in terms of solution quality, while also showing good relative offline and online computation time.

Maarten Flippo, Delft University of Technology

Title: A Multi-Stage Proof Logging Framework to Certify the Correctness of CP Solvers

Abstract: Proof logging is used to increase trust in the optimality and unsatisfiability claims of solvers. However, to this date, no constraint programming solver can practically produce proofs without significantly impacting performance, which hinders mainstream adoption. We address this issue by introducing a novel proof generation framework, together with a CP proof format and proof checker. Our approach is to divide the proof generation into three steps. At runtime, we require the CP solver to only produce a proof sketch, which we call a scaffold. After the solving is done, our proof processor trims and expands the scaffold into a full CP proof, which is subsequently verified. Our framework is agnostic to the solver and the verification approach. Through MiniZinc benchmarks, we demonstrate that with our framework, the overhead of logging during solving is often less than 10%, significantly lower than other approaches, and that our proof processing step can reduce the overall size of the proof by orders of magnitude and by extension the proof checking time. Our results demonstrate that proof logging has the potential to become an integral part of the CP community.

Francesco Cordiano, Delft University of Technology

Title: Data-driven uncertainty discretization with probabilistic guarantees for chance-constrained optimal control

Abstract: Dealing with uncertainty is one of the major challenges in optimal control and decision-making. In this field, chance-constrained optimal control has received growing interest, due to the possibility of optimizing the expected system performance, while providing constraint satisfaction with the desired probability. However, solving such problems requires some approximations, and current solution methods are generally computationally expensive, or lead to conservative system performance. To address this problem, we propose a novel optimal control scheme that relies on a discretization of the uncertainty, allowing a trade-off between computational complexity and accuracy of the approximation. The proposed method is purely data-driven, thus not requiring specific assumptions on the distribution, and it is endowed with probabilistic feasibility and performance guarantees. We demonstrate the effectiveness of the approach with a numerical example.
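
In generic form (my notation), the problem class is

\min_{u_0, \dots, u_{T-1}} \; \mathbb{E}\Big[\sum_{t=0}^{T-1} \ell(x_t, u_t)\Big] \quad \text{s.t.} \quad x_{t+1} = f(x_t, u_t, w_t), \quad \mathbb{P}\big(x_t \in \mathcal{X},\; u_t \in \mathcal{U} \;\; \forall t\big) \ge 1 - \varepsilon,

and the proposed scheme replaces the (unknown) distribution of the disturbance w_t by a data-driven discrete approximation whose resolution trades computational complexity against accuracy, while retaining probabilistic feasibility and performance guarantees.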

Room: China

Sina Shahri Majarshin, Eindhoven University of Technology

Title: A data-driven approach to a problem of optimal replacement in maintenance

Abstract: In our work, we present a method for optimizing maintenance strategies in machines with a key component, focusing on identifying the optimal replacement time within a finite horizon, which represents the machine's expected lifespan. Accurately determining the replacement time is challenging due to uncertainties in component degradation and variable operating conditions. To address this, we employ Robust Markov Decision Processes (RMDP) to optimize replacement decisions, allowing us to model degradation and account for uncertainty systematically. We incorporate ambiguity regions, including likelihood-based and KL-based regions, to capture and quantify the uncertainty in the degradation process. By leveraging uncertainty modelling, our approach minimizes overall maintenance costs while ensuring the component’s performance. Through experimentation, we validate the robustness and effectiveness of the RMDP framework. Our findings emphasize the importance of integrating robust decision-making with structured uncertainty modelling to enhance maintenance optimization. This research offers practical insights for improving maintenance strategies in uncertain environments, contributing to better reliability and performance in real-world applications.
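
The core recursion of such a robust MDP (written in standard form, not specific to this work) replaces the nominal expectation by a worst case over the ambiguity set \mathcal{P}_{s,a} of transition laws:

V_t(s) = \min_{a \in \{\text{continue},\, \text{replace}\}} \Big[c(s,a) + \max_{P \in \mathcal{P}_{s,a}} \sum_{s'} P(s' \mid s, a)\, V_{t+1}(s')\Big],

where the state s tracks the degradation level over the finite horizon and \mathcal{P}_{s,a} is, for example, a likelihood- or KL-based region around the estimated degradation model.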

Poulad Moradi Shahmansouri, University of Luxembourg

Title: Efficient asymptotics for condition-based replacement thresholds

Abstract: Condition-based maintenance aims to proactively plan actions to prevent equipment failure while considering economic implications. In this study, we address the problem of a component that undergoes degradation with independent stationary increments. Failure occurs when the degradation exceeds a threshold, and the degradation is periodically inspected. During each inspection, a decision must be made whether to replace the component, and an estimate of the remaining useful life can be made for spare part inventory planning. Although this problem has been extensively studied, computing optimal replacement policies typically requires dynamic programming, which poses a challenge for practical implementation. To address these issues, we conduct an asymptotic analysis of optimal replacement thresholds and cost rates, leading to the development of an efficient replacement policy with an asymptotic optimality guarantee. Our extensive numerical experiments demonstrate that the average optimality gap of our asymptotically optimal replacement policy is at most 0.03% when degradation increments follow a discrete distribution, and it is indistinguishable from zero in almost all instances when the distribution is continuous. Notably, the asymptotically optimal replacement threshold outperforms the exact algorithm in terms of computational time by a factor of at least 1000. Furthermore, we are extending our analysis to situations where the parameters of the degradation increment distribution are unknown, requiring a Bayesian learning approach.

Matthew Maat, University of Twente

Title: Running in circles: Cycle patterns and how algorithms use them for games

Abstract: Parity games, mean payoff games and energy games are examples of games that are played on the vertices of a directed graph. The problem of finding optimal strategies or values for these games is a well-studied topic, with countless algorithms being proposed. It is interesting from a complexity-theoretic viewpoint, as it is one of the few problems in both NP and coNP for which no polynomial-time algorithm is known.

We introduce the notion of 'cycle pattern' to shed some light on the underlying structure of these games. We characterize which cycle patterns can be realized in a weighted graph. We show some hardness results related to cycle patterns and to computing the winner of a game using only cycle patterns. We also show some bounds on the maximum required size of weights in the graph, and what this implies for algorithms that solve mean payoff games.

This is joint work with Georg Loho and Mateusz Skomra.

Yang Li, University of Twente

Title: Analysis of the effects of competition in business-to-peer bike sharing markets

Abstract: In this presentation, we propose a two-stage game theoretic model to investigate the effects of competition in a Business-to-Peer (B2P) sharing market, such as bike sharing in metropolitan areas. In the first stage, firms make decisions on price and volume to maximize profits. In the second stage, time-sensitive consumers engage in a nonatomic game to maximize their individual utility by deciding to either use or not use the service. An innovative feature of our model is to additionally consider non-compliant behavior of consumers (such as theft and vandalism). This market is referred to as an imperfect as opposed to a perfect market. We analyze firm profits, consumer surplus and total social welfare as the market shifts from monopoly to duopoly. Our findings show that, in a perfect market, competition harms the firms' profits and, moreover, makes it more difficult for firms to operate profitably, because this requires consumers to be more patient. Yet competition can increase consumer surplus by transferring a fraction of the firms' profits to consumers. Total social welfare remains the same. While this aligns well with classical theory and empirical findings in European cities, our main result concerns the imperfect market: There, competition helps to inhibit non-compliant behavior, and as a result, it increases both consumer surplus and total social welfare.

Room: Congo

Maaike Elgersma, Delft University of Technology

Title: The art of energy system modeling: finding tight and compact models

Abstract: To achieve climate goals by 2050, accurate energy system optimization (MIP) models are needed to help decision-makers make investment plans. To increase accuracy, a high resolution in the temporal and spatial dimensions is needed, as well as many details on the operational capabilities of energy generators. However, this results in large-scale models whose optimal solutions cannot be obtained within any meaningful computing time, not even by supercomputers using the best possible solvers. Thus, researchers often seek the right trade-off between computational tractability and accuracy. Still, they often overlook that improving the existing model formulations in terms of tightness and compactness could already improve computation speed. The tightness of a formulation increases if the LP-relaxation is closer to the convex hull of the MIP model. The compactness of a formulation is determined by its (relative) number of constraints, variables, and nonzero elements in the constraint matrix. In my talk, I want to share different ways to obtain and prove tight and compact MIP models to improve the computational tractability of large-scale optimization problems.
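
A textbook illustration of the trade-off: linking a binary investment variable y_j to operational variables x_{ij} \ge 0 can be done in the aggregated form

\sum_i x_{ij} \le M_j\, y_j \quad \text{(compact: one constraint per j)}

or in the disaggregated form

x_{ij} \le m_{ij}\, y_j \quad \text{for all } i \quad \text{(typically a much tighter LP relaxation, at the cost of many more constraints and nonzeros),}

and which of the two solves faster depends on the instance; energy system models are full of exactly this kind of modeling choice.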

Zhi Gao, Utrecht University

Title: Dantzig-Wolfe decomposition of temporal resolutions in energy system optimization

Abstract: Sufficient temporal resolutions are essential for ensuring high-quality solutions for energy system optimization problems. Yet, the numerous time-coupled decision variables and constraints often increase the problem's complexity significantly. To address this challenge, various decomposition techniques have been explored. Many of these techniques, however, fail to enhance computational performance as they introduce distortions to the optimization problem and require a large number of iterations. In this work, we propose applying Dantzig-Wolfe decomposition to the temporal resolution configuration of energy system optimization problems, in a way that requires few iterations and always maintains solution feasibility. We expect our method to accommodate the necessary temporal resolution and to produce accurate solutions more efficiently than solving the original problem.
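
As a reminder of the mechanics (my reading of the abstract; the precise decomposition along the temporal dimension is what the work contributes), Dantzig-Wolfe keeps a restricted master problem over convex combinations of subproblem solutions x^j,

\min \sum_j \lambda_j\, c^{\top} x^j \quad \text{s.t.} \quad \sum_j \lambda_j\, A x^j = b, \quad \sum_j \lambda_j = 1, \quad \lambda \ge 0,

where A x = b collects the coupling (here, time-linking) constraints, and new columns are generated by a pricing subproblem that minimizes the reduced cost (c - A^{\top}\pi)^{\top} x over the remaining constraints.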

Ties Schalij, Tilburg University

Title: Improving any heuristic via policy iteration

Abstract: Many Combinatorial Optimization problems can be formulated as a Markov Decision Process, where a heuristic for said problem is a policy. Policy Iteration guarantees an improved policy, and thus heuristic, but suffers from computational problems. In this research we approximate infinitely sized state spaces by finite state spaces, allowing the Policy Iteration algorithm to become feasible. Additionally, we establish that the performance loss incurred through this approximation is bounded. Our findings are illustrated through the Knapsack Problem, utilizing a greedy heuristic as a case study, with empirical results demonstrating the effectiveness of our approach.
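
A minimal sketch of the idea on the case study from the abstract (my own toy code: the knapsack is modeled so that the state is the remaining capacity plus the index of the next item, the base heuristic is "take every item that still fits", and the improved policy is one step of policy iteration, i.e. a rollout that looks one decision ahead and then follows the base heuristic):

from typing import List, Tuple

Item = Tuple[float, float]  # (value, weight)

def greedy_value(items: List[Item], start: int, capacity: float) -> float:
    # Base heuristic: from item `start` onwards, take every item that still fits.
    total = 0.0
    for value, weight in items[start:]:
        if weight <= capacity:
            total += value
            capacity -= weight
    return total

def rollout_value(items: List[Item], capacity: float) -> float:
    # Improved policy: at each item, compare "take" vs "skip", each followed by the greedy heuristic.
    total = 0.0
    for i, (value, weight) in enumerate(items):
        take = value + greedy_value(items, i + 1, capacity - weight) if weight <= capacity else float("-inf")
        skip = greedy_value(items, i + 1, capacity)
        if take >= skip:
            total += value
            capacity -= weight
    return total

items = [(5.0, 5.0), (4.0, 3.0), (4.0, 3.0)]
print(greedy_value(items, 0, 6.0), rollout_value(items, 6.0))  # 5.0 vs 8.0

By the policy improvement property, the rollout value is never worse than the value of this (sequentially consistent) greedy base heuristic on the same instance; the research described above additionally bounds the loss incurred by approximating the infinite state space with a finite one.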

Polle Dankers, Tilburg University

Title: Mapping post-disaster infrastructure using GPS data from relief vehicles

Abstract: After a disaster strikes, relief goods need to be delivered to beneficiaries as quickly and efficiently as possible. However, a disaster may destroy part of the existing infrastructure. Knowledge about the state of the infrastructure is essential for aid organizations to route efficiently. To map the state of the infrastructure, we estimate driving times on edges using GPS trajectories from relief vehicles in three ways:

  • Edges driven more than alpha times: only the GPS data of vehicles on those specific edges are used.
  • Edges driven more than beta, but less than alpha times: next to the GPS data of the specific edges, we use a measure on the likelihood of the edge being deliberately avoided after the disaster. We check expected flows given optimized routes based on the pre-disaster infrastructure, and compare these with driven flows. If the edge is likely avoided, we estimate the proportion of delay incurred on the edge, assuming that the pre-disaster optimized routes would be chosen if not damaged.
  • Edges driven less than beta times are only predicted with avoidance estimation. Using an iterative procedure, we aim to pinpoint delays on specific edges. Undriven edges which are not in pre-optimized routes are labeled unpredictable.

The approach can be used in a static setting, predicting the state of infrastructure only once. It can be extended to a dynamic version where the predictions from the prior time period are used as infrastructure during the next period’s trajectory generation.