On the uselessness of Supercomputers for solving discrete optimization problems by means of some “theoretically efficient” algorithms
| Field | Value |
|---|---|
| Language | English |
| Participant type | Sectional (session talk) |
| URL | https://motor24.oscsbras.ru/programme/Technical_Program_MOTOR2024_v3.pdf |
| Conference | XXIII International Conference Mathematical Optimization Theory and Operations Research, 30 Jun - 6 Jul 2024, Omsk |
| Authors | Sevastyanov S. |
| Affiliations | |
Abstract:
Which of the well-known “theoretically efficient” algorithms (those with guaranteed accuracy bounds and “polynomial” running time) designed for well-known discrete optimization problems are really efficient and suitable for solving practical instances of those problems? What should a test of the real efficiency of an algorithm look like? Should it be performed on a modern Supercomputer? On a global (world-wide) system of Supercomputers? On a Universe-wide system of such Supercomputers?

We try to answer these questions by testing (and comparing) two well-known theoretical approximation algorithms designed for one classical problem of scheduling theory: the Job Shop problem with the schedule length (makespan) minimization criterion. For the test we took one of the two algorithms (the “most advanced” one, providing a $(2+\varepsilon)$-approximation for any $\varepsilon > 0$) and a rather small problem instance with 10 machines and 10 operations per job. We also adopted several assumptions:

A. Our small instance is solved on a system of parallel SUPERCOMPUTERS (“SC”, for short), each SC having the maximal capacity known today, $2\cdot 10^{18}$ flops.

B. There are 10 billion such SUPERCOMPUTERS in our system on the Globe, i.e., about one “personal Supercomputer” for every inhabitant of the Earth. (By the way, ONE such modern SC occupies an area of more than 1000 m², which gives a total of more than 10,000,000 km².)

C. In each of the 10 trillion galaxies (in the visible part of the Universe) there are 100 billion stars, each star has 10 planets, and on each planet we have placed an SC system similar to that of the Earth. In total, $10^{25}$ Globe SC systems, forming the Computer System of the Universe (CSU, for short).

D. We have the opportunity to parallelize our problem, dividing the load equally between all computers of the CSU, and each SC can work as long and as continuously as desired.

E. We measure the working time of our CSU in time units “AU” (Age of the Universe), which we take to be 15 billion years.

Finally, we decided not to bother our system with finding a $(2+\varepsilon)$-approximate solution for an arbitrarily small $\varepsilon$, but took $\varepsilon = 100$ (allowing the system to find a solution 100 times worse than the optimal one). How long, do you think, will it take our CSU to find such a solution? The results of our estimation may surprise experienced discrete optimization people and make them think about such questions as: “What are Supercomputers needed for?” (or “How and where should they actually be used?”), “Which ‘theoretically efficient’ methods should be treated as really effective?”, and “How should they be tested for actual effectiveness and practicality?”
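For a reader who wants to reproduce the back-of-envelope arithmetic behind assumptions A-E, here is a minimal sketch in Python. The numeric constants are exactly those quoted in the abstract; the operation count actually required by the $(2+\varepsilon)$-approximation algorithm is deliberately left as a parameter (`ops_needed`), since the abstract withholds its final estimate, and the figure used in the last print statement is purely illustrative.

```python
# Back-of-envelope arithmetic for assumptions A-E of the abstract.
# All numeric constants are taken from the abstract; "ops_needed" is a
# hypothetical placeholder, NOT the abstract's own estimate.

FLOPS_PER_SC = 2e18                      # (A) one Supercomputer, flop/s
SC_PER_GLOBE = 1e10                      # (B) 10 billion SCs on the Globe
GLOBE_SYSTEMS = 1e13 * 1e11 * 10         # (C) galaxies * stars * planets = 1e25

SECONDS_PER_YEAR = 365.25 * 24 * 3600
AU_SECONDS = 15e9 * SECONDS_PER_YEAR     # (E) one "Age of the Universe" in seconds

csu_flops = FLOPS_PER_SC * SC_PER_GLOBE * GLOBE_SYSTEMS   # ~2e53 flop/s in total
ops_per_au = csu_flops * AU_SECONDS                       # ~9.5e70 operations per AU

def csu_time_in_au(ops_needed: float) -> float:
    """Running time of the fully parallelized CSU, in AU, for a given number
    of elementary operations (assumption D: perfect load balancing)."""
    return ops_needed / ops_per_au

if __name__ == "__main__":
    print(f"CSU aggregate capacity : {csu_flops:.2e} flop/s")
    print(f"Operations per one AU  : {ops_per_au:.2e}")
    # Purely illustrative figure -- not the algorithm's operation count:
    print(f"1e100 operations take  : {csu_time_in_au(1e100):.2e} AU")
```

The sketch shows that the CSU performs roughly $10^{71}$ elementary operations per AU, so any algorithm whose operation count on this instance exceeds that threshold by a factor of $k$ needs more than $k$ Ages of the Universe, no matter how perfectly the work is parallelized.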
Cite:
Sevastyanov S.
On the uselessness of Supercomputers for solving discrete optimization problems by means of some “theoretically efficient” algorithms
XXIII International Conference Mathematical Optimization Theory and Operations Research 30 Jun - 6 Jul 2024