
AI Speeds Up Complex Delivery Route Planning

A new parallel computing method cuts solution times for vehicle routing problems by up to 8 times while maintaining solution quality, enabling faster logistics optimization from hundreds to millions of customers.

AI Research
March 26, 2026
4 min read

A new parallel computing approach can significantly accelerate the solution of complex delivery route planning problems, a critical task in logistics and transportation. Researchers have developed FILO2x, a method that uses multiple solvers working together asynchronously to optimize a shared solution for the Capacitated Vehicle Routing Problem (CVRP), which involves finding the least-cost routes for capacity-limited vehicles serving a set of customers. This advancement allows instances ranging from hundreds to over a million customers to be solved in a fraction of the time required by previous sequential approaches, without compromising solution quality.
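
To make the problem concrete, here is a minimal sketch (not the paper's code) of what a CVRP solution looks like and how its cost and feasibility are checked; the coordinates, demands, and vehicle capacity below are purely illustrative.

```python
import math

def solution_cost(routes, coords, depot=0):
    """Total Euclidean length of all routes; each route starts and ends at the depot."""
    total = 0.0
    for route in routes:
        stops = [depot] + route + [depot]
        total += sum(math.dist(coords[stops[i]], coords[stops[i + 1]])
                     for i in range(len(stops) - 1))
    return total

def is_feasible(routes, demand, capacity):
    """A solution is feasible if every customer is visited exactly once
    and no route's summed demand exceeds the vehicle capacity."""
    visited = [c for route in routes for c in route]
    return (sorted(visited) == sorted(demand)
            and all(sum(demand[c] for c in route) <= capacity for route in routes))

# Illustrative instance: depot 0 plus three customers on a unit square.
coords = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
demand = {1: 4, 2: 3, 3: 4}
routes = [[1, 2], [3]]            # two vehicles, capacity 7 each

print(is_feasible(routes, demand, capacity=7))   # True
print(round(solution_cost(routes, coords), 3))   # 5.414
```

A CVRP solver searches over such route sets for the feasible one with minimum total cost.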

The key finding is that FILO2x achieves near-linear speedups by leveraging parallel processing for the most time-consuming parts of the optimization. The researchers tested the method on three datasets: X instances (100 to 1,000 customers), B instances (up to 30,000 customers), and I instances (20,000 to 1,000,000 customers). For standard runs, FILO2x with 10 solvers cut computing times by roughly a factor of 6 on both X and B instances and 6.4 on I instances, while keeping solution gaps to the best-known solutions essentially unchanged. On X instances, for example, the average gap was 0.18% for FILO2x versus 0.17% for the sequential FILO2, and on I instances both reached 0.70%.

The methodology builds on the FILO2 algorithm, a sequential metaheuristic designed for large-scale CVRP instances. FILO2x introduces a parallel schema in which multiple FILO2-like solvers cooperatively optimize a shared solution without explicit decomposition. Each solver runs independently, performing ruin, recreate, and local search steps to generate candidate changes, which are sent to a central dispatcher and broadcast to all solvers. Synchronization is minimized through message-passing communication and data redundancy, ensuring all solvers follow the same search trajectory. The approach uses a shared-memory architecture with thread-safe queues to manage changes, and solvers validate received changes for feasibility before applying them, avoiding infeasible solutions.
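
The cooperative schema described above can be sketched with standard Python threads and thread-safe queues. This is a toy illustration, not the FILO2x implementation: a random 2-opt move on a simple tour stands in for the ruin/recreate/local-search steps on a full CVRP solution, and the dispatcher simply re-broadcasts improving changes to every solver's inbox.

```python
import queue
import random
import threading

def tour_cost(tour, dist):
    """Length of a closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def solver(inbox, dispatch, dist, start, steps, seed):
    """One cooperating solver: adopts broadcast changes, proposes 2-opt moves."""
    rng = random.Random(seed)
    tour = start[:]
    for _ in range(steps):
        while not inbox.empty():          # drain inbox: apply broadcast changes
            tour = inbox.get()
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        if tour_cost(cand, dist) < tour_cost(tour, dist):
            tour = cand
            dispatch.put(cand)            # send the improving change upstream
    dispatch.put(None)                    # signal this solver is done

def run(n_solvers=4, n=30, steps=300, seed=0):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
            for ax, ay in pts]
    start = list(range(n))
    dispatch = queue.Queue()
    inboxes = [queue.Queue() for _ in range(n_solvers)]
    threads = [threading.Thread(target=solver,
                                args=(inboxes[k], dispatch, dist, start, steps, seed + k))
               for k in range(n_solvers)]
    for t in threads:
        t.start()
    best, done = start, 0
    while done < n_solvers:               # dispatcher loop
        msg = dispatch.get()
        if msg is None:
            done += 1
        elif tour_cost(msg, dist) < tour_cost(best, dist):
            best = msg
            for box in inboxes:           # broadcast the change to every solver
                box.put(best)
    for t in threads:
        t.join()
    return tour_cost(start, dist), tour_cost(best, dist)
```

Calling `run()` returns the initial and final tour costs; the solvers never block on each other, which is the point of the asynchronous design.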

Analysis shows that FILO2x effectively balances speed and quality. On X instances, using up to 5 solvers provided almost linear speedups, with computing times converging to around 10 seconds for short runs and 100 seconds for long runs as more solvers were added. Statistical tests, namely the Wilcoxon signed rank test with a Bonferroni-corrected significance level of 0.002778, confirmed no significant difference in solution quality between FILO2x and FILO2 across all datasets. On B instances, for instance, long runs with FILO2x achieved an average gap of 0.36%, identical to FILO2's, and on I instances both reached 0.29%. The speedup was more pronounced for instances with more routes, because fewer routes increased the likelihood of generating infeasible changes due to overlapping optimizations.
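
The Bonferroni-corrected level quoted above is simply the family-wise significance level divided by the number of paired comparisons. Working backwards, 0.002778 corresponds to a conventional family-wise α of 0.05 spread over 18 tests; both of those figures are inferred here, not stated in this summary.

```python
alpha_family = 0.05   # conventional family-wise significance level (assumed)
m_tests = 18          # number of paired comparisons; inferred from 0.05 / 18 ≈ 0.002778
alpha_per_test = alpha_family / m_tests
print(round(alpha_per_test, 6))   # 0.002778
```

Dividing α this way guarantees the chance of any false positive across all 18 comparisons stays at or below 0.05.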

The implications of this research are substantial for real-world applications in logistics, where faster route optimization can lead to reduced fuel costs, lower emissions, and improved delivery efficiency. By cutting solution times from minutes to seconds for large-scale problems, FILO2x enables more dynamic and responsive planning, such as real-time adjustments for traffic or demand changes. This is particularly relevant for e-commerce and freight transportation, where companies manage thousands of delivery points. The method's scalability to extremely large instances, such as those with a million customers, opens the door to optimizing national or regional distribution networks that were previously computationally prohibitive.

However, the approach has limitations. The speedup is sub-linear for the core optimization procedure, especially with many solvers, due to factors such as CPU cache contention and synchronization overhead. In addition, the probability of generating infeasible changes increases with the number of solvers and decreases with the number of routes; on X instances with 10 solvers, for example, the median infeasibility frequency reached 20%, peaking higher for instances with few routes such as X-n157-k13. The route minimization procedure could not be effectively parallelized without degrading solution quality, so it remains sequential. Future work could address these issues by refining change validation or exploring hybrid parallelization strategies.
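
The observation that infeasible changes become more likely with more solvers and fewer routes has a simple back-of-envelope analogue, which is an illustration here and not a model from the paper: if each of k solvers is modifying one of r routes chosen uniformly and independently, the chance that at least two solvers touch the same route is the classic birthday-problem probability.

```python
def collision_probability(k_solvers, n_routes):
    """Birthday-style chance that at least two of k solvers are
    modifying the same of n routes (uniform, independent choices)."""
    p_no_collision = 1.0
    for i in range(k_solvers):
        p_no_collision *= (n_routes - i) / n_routes
    return 1.0 - p_no_collision

# More solvers or fewer routes -> more overlap, hence more conflicting changes.
print(round(collision_probability(10, 13), 2))    # a 13-route fleet: 0.99
print(round(collision_probability(10, 100), 2))   # a 100-route fleet: 0.37
```

With only 13 routes shared among 10 solvers, overlap is almost certain, which is consistent with the high infeasibility frequencies reported for few-route instances such as X-n157-k13.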

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn