
How to simulate computational execution time?

Operations Research · Asked by Matheus Diógenes Andrade on August 19, 2021

I am running computational experiments with an Integer Programming (IP) formulation on a well-known set of instances from the literature on my problem, and I would like to benchmark my formulation against another author's formulation from the literature.

However, the configuration of the machine on which the author's formulation was executed is different from mine. So I thought that if I could find the equivalent time from the author's machine to mine, I could use that equivalent time in my computational experiments. For example, if one hour of execution on the author's machine is equivalent to two hours of execution on my machine, and the author's experiments were run with a one-hour time limit, then I can run my experiments with a two-hour time limit and obtain a more precise comparison.

Hence, I would like to know whether there is any way to calculate the equivalent time between two different machines (even if only approximately).
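
One crude way to attempt the conversion described above is to scale the published time limit by the ratio of single-thread CPU benchmark scores of the two machines. A minimal sketch of that arithmetic follows; the scores are hypothetical placeholders, not real measurements, and the answers below explain why this is a ballpark heuristic at best:

    # Scale a published time limit by a machine-speed ratio -- a ballpark
    # heuristic only. The benchmark scores are hypothetical placeholders;
    # real single-thread scores for the two CPUs would be substituted.
    REFERENCE_SCORE = 2400.0   # hypothetical score of the author's CPU
    MY_SCORE = 1200.0          # hypothetical score of my (slower) CPU

    reference_time_limit = 3600.0  # seconds used in the original paper

    # A slower machine (lower score) needs proportionally more wall time
    # to do the same amount of work.
    my_time_limit = reference_time_limit * (REFERENCE_SCORE / MY_SCORE)

    print(f"Equivalent time limit on my machine: {my_time_limit:.0f} s")  # 7200 s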

2 Answers

In support of other answers and suggestions that you just run the other algorithm on your hardware, I would argue that failing to match the published results exactly is not necessarily cause for concern. If the authors reran the same examples using their code on their hardware, there is a high likelihood that timings would be at least somewhat different, and a distinct possibility that (if not run to proven optimality) the incumbent objective value and/or best bound might be different at the same time limit originally used. Unless you get a wildly different result from their algorithm (infeasible when they have a feasible solution, a better solution than their proven optimum, ...), I would vote for side-by-side comparison on your hardware.
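
For illustration, such a side-by-side run might look like the sketch below, assuming Gurobi's Python API and two pre-built model files (the file names are placeholders):

    # Run both formulations on the same machine, same solver, same time
    # limit. Assumes Gurobi's Python API (gurobipy); the model file names
    # are hypothetical placeholders.
    import gurobipy as gp

    TIME_LIMIT = 3600  # seconds, identical for both formulations

    for model_file in ["my_formulation.mps", "author_formulation.mps"]:
        model = gp.read(model_file)          # load a pre-built model file
        model.Params.TimeLimit = TIME_LIMIT
        model.optimize()
        # Guard against runs that time out with no incumbent solution.
        incumbent = model.ObjVal if model.SolCount > 0 else None
        print(f"{model_file}: incumbent={incumbent}, "
              f"bound={model.ObjBound}, time={model.Runtime:.1f}s")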

Answered by prubin on August 19, 2021

Short answer: you can't with any decent level of accuracy. The best you can do is ballpark comparisons.

So many factors affect the outcome that, even on an identical machine, you can maybe get within a 20% difference with a decent degree of confidence. I know this for a fact because we benchmark on many identical machines, and the results always vary.
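
A quick way to see this variance for yourself is to repeat an identical run several times on one machine and look at the spread. A minimal sketch, where solve_instance() is a hypothetical stand-in for one complete solver run:

    # Measure run-to-run timing variance by repeating an identical solve on
    # one machine. solve_instance() is a hypothetical stand-in for a single
    # complete solver run on a fixed instance.
    import statistics
    import time

    def solve_instance():
        # Placeholder workload; replace with the actual solver call.
        sum(i * i for i in range(5_000_000))

    timings = []
    for _ in range(10):
        start = time.perf_counter()
        solve_instance()
        timings.append(time.perf_counter() - start)

    mean = statistics.mean(timings)
    spread = statistics.stdev(timings)
    print(f"mean {mean:.3f}s, stdev {spread:.3f}s "
          f"({100 * spread / mean:.1f}% of mean)")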

Important things under your control

  • The machine itself

  • The operating conditions

  • The version of the kernel

  • The power supply

  • Having identical OS & software on both machines (including running daemons/services)

  • The network cables (assuming that's important)

  • The version of the compiler & libraries. This is crucial if you are using open-source software. (A minimal sketch for logging this sort of information follows the list.)
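
As a minimal sketch of that logging, Python's standard library can record part of the software environment next to each result; solver and library versions would be appended per project:

    # Record the software environment alongside each experiment so results
    # from different machines can at least be audited later. Standard
    # library only; solver and library versions would be appended as needed.
    import json
    import platform
    import sys

    environment = {
        "os": platform.platform(),
        "machine": platform.machine(),
        "processor": platform.processor(),
        "python": sys.version,
    }
    print(json.dumps(environment, indent=2))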

Important things you can't control

  • The machine's hardware. This made both lists to point out that, unless you have the factory settings of the reference machine, it is hard to reproduce. What's the motherboard? What's the exact CPU? Is it a single- or dual-socket system? What's the RAM frequency? Does it use 2x16 GB of RAM or 4x8 GB?

  • The operating conditions. This also made both lists. Was the reference machine in a cool room, especially if Turbo Boost was activated on multi-hour runs? What's the level of dust inside the box? Was someone else logged in at the same time?

  • The speed of the network, unless it's private and wired.

  • The version of the kernel. Yes, this also made both lists (I said this is hard, right?). Unless you clone the hard drives, even slight differences in installed software can cause the same Linux kernel to work differently on identical machines, or even not at all on one of them. The best way to resolve this is to use Windows.

  • Different pieces of software. Keep in mind that if you are comparing your method to another author's using your own code, you are comparing algorithms and implementations at the same time, in which case your error bounds can be massive. OR code is extremely implementation-sensitive.

Answered by Nikos Kazazakis on August 19, 2021
