Sparse method for large-scale problems [wikipedia](https://en.wikipedia.org/wiki/Revised_simplex_method)
* Python: [scipy simplex](https://github.com/scipy/scipy/blob/main/scipy/optimize/_linprog_rs.py) to read.
  **WARNING:** check the sparse algorithm specifically, `_linprog_rs.py`
* Numerical Recipes: simplex algorithm walkthrough
[paper](http://numerical.recipes/webnotes/nr3web12.pdf), and
[local version](resources/nr3web12.pdf)
* Parallel search paths for the simplex algorithm: how to parallelize
[local version](resources/s10100-016-0452-9.pdf)
* Linear Optimization with CUDA: simplex algorithm
[local version](resources/Simplex-ref1.pdf)
* github [cuda-revised-simplex](https://github.com/SethosII/cuda-revised-simplex)
* github [simplex-gpu](https://github.com/golvok/simplex-gpu)
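The core iteration described in the references above can be sketched in plain NumPy. This is a hedged toy version, not the algorithm from any one reference: dense (no sparsity), with a naive `np.linalg.solve` per iteration instead of a maintained LU factorization, and it assumes the problem is already in standard form `min c@x, A@x = b, x >= 0` with a known feasible starting basis (e.g. slack variables). The function name `revised_simplex` is ours.

```python
import numpy as np

def revised_simplex(A, b, c, basis, max_iter=100):
    """Toy revised simplex: min c@x s.t. A@x = b, x >= 0,
    starting from a given feasible basis (list of m column indices)."""
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)            # current basic solution
        y = np.linalg.solve(B.T, c[basis])     # dual (simplex multipliers)
        reduced = c - A.T @ y                  # reduced costs
        # entering variable: first non-basic column with negative reduced cost
        entering = next((j for j in range(n)
                         if j not in basis and reduced[j] < -1e-9), None)
        if entering is None:                   # no improving direction: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, float(c @ x)
        d = np.linalg.solve(B, A[:, entering])  # how x_B changes along entering
        ratios = [(x_B[i] / d[i], i) for i in range(m) if d[i] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, leave = min(ratios)                 # ratio test picks leaving row
        basis[leave] = entering
    raise RuntimeError("iteration limit reached")
```

A real implementation would update a factorization of `B` incrementally instead of re-solving from scratch; that is exactly the part the sparse/GPU papers above focus on.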
### Implementation
* Read the Python scipy sparse solver
* Use the [cuSparse](https://docs.nvidia.com/cuda/cusparse/index.html) library
#### Ignore edge cases: focus on the solver
Edge cases like unbounded or infeasible problems can be ignored for now.
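Even while skipping these cases in our own solver, it is useful to see how a reference solver reports them, so we can exclude such instances from testing. A sketch using scipy's `linprog` status codes (0 = optimal, 2 = infeasible, 3 = unbounded):

```python
from scipy.optimize import linprog

# Infeasible: x <= 0 and x >= 1 cannot both hold.
infeasible = linprog(c=[1], A_ub=[[1], [-1]], b_ub=[0, -1], method="highs")

# Unbounded: minimize -x with only the lower bound x >= 0.
unbounded = linprog(c=[-1], bounds=[(0, None)], method="highs")

print(infeasible.status, unbounded.status)
```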
#### Optimize
* fuse CUDA instructions together
* tune the block size (caching behaviour: fine-tune for the Euler environment specifically)
* read papers on other algorithms _if enough time_
### Benchmark
Well-known [problems](https://plato.asu.edu/ftp/lpopt.html) we should use as benchmarks.
HOWTO:
* `cd miplib`
* `cd scipoptsuite-8.0.4`
* `mkdir build && cd build && cmake .. && make -j8 && make -j8 test`
* `make checker`, using the local `checker/gmp` (already hardcoded in the makefile)
* `make -j8 test TEST=testthetest`, or any other file which contains a list of existing `.mps.gz`s
#### Smallest instances sorted by byte size
```
797 instances/miplib2017/revised-submissions/miplib2010_publically_available/instances/markshare_4_0.mps.gz
2179 instances/miplib2017/revised-submissions/miplib2010_publically_available/instances/markshare2.mps.gz
3872 instances/miplib2017/revised-submissions/miplib2010_publically_available/instances/enlight_hard.mps.gz
4005 instances/miplib2017/revised-submissions/miplib2010_publically_available/instances/pk1.mps.gz
5686 instances/miplib2017/revised-submissions/miplib2010_publically_available/instances/neos5.mps.gz
5884 instances/miplib2017/revised-submissions/Simon_Bowly/instances/gen-ip054.mps.gz
6320 instances/miplib2017/revised-submissions/miplib2010_publically_available/instances/neos859080.mps.gz
7077 instances/miplib2017/revised-submissions/miplib2010_timtab1/instances/timtab1.mps.gz
8706 instances/miplib2017/revised-submissions/Jeff_Linderoth2/instances/neos-3046615-murg.mps.gz
```
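A listing like the one above can be regenerated with a short Python sketch (the directory layout is assumed to match the paths in the listing; the function name is ours):

```python
import pathlib

def smallest_instances(root, n=9):
    """Return the n smallest *.mps.gz files under root as (size, path) pairs."""
    files = sorted(pathlib.Path(root).rglob("*.mps.gz"),
                   key=lambda p: p.stat().st_size)
    return [(p.stat().st_size, str(p)) for p in files[:n]]

# usage: for size, path in smallest_instances("instances"): print(size, path)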
Most libraries already have a dual solver: use that if available!
#### HiGHS
HiGHS is a high-performance linear programming solver.
It uses the revised simplex method and the interior point method, and exploits the sparsity of the problem.
It will replace the simplex, revised simplex and interior point methods in scipy from version 1.11 onwards.
Website: https://highs.dev/
Opensource code: https://github.com/ERGO-Code/HiGHS
Scipy link: https://github.com/scipy/scipy/blob/main/scipy/optimize/_linprog_highs.py
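A minimal usage sketch through scipy, where the HiGHS backend is selected with `method="highs"`: maximize x + y subject to x + y <= 1, written as a minimization.

```python
from scipy.optimize import linprog

# max x + y  s.t.  x + y <= 1, x, y >= 0   →   min -(x + y)
res = linprog(c=[-1, -1], A_ub=[[1, 1]], b_ub=[1], method="highs")
print(res.status, res.fun)  # status 0 means optimal
```

This gives us a trusted reference optimum to check our own solver's output against.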
SSH access: add these lines to `~/.ssh/config`
```
Host davinci
Hostname davinci.inf.ethz.ch
User "dphpc_simplex2_1"
IdentityFile /home/felix/.ssh/id_rsa
```
and run `ssh davinci`.
[Getting started](https://scicomp.ethz.ch/wiki/Tutorials#Cluster_tutorials) and
[with GPUs](https://scicomp.ethz.ch/wiki/Getting_started_with_GPUs).
Simple implementation and benchmark baseline against other implementations;
understand the Python scipy code; draft a pseudocode description of the algorithm;
and check correctness between the different implementations.
Get a pipeline up: read data from a file, ..., print some garbage output
like just the first guess.
Look at how to translate the solver to the **dual** version, start writing CUDA...
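The pipeline stub could look like the sketch below. The file format and both function names are placeholders we made up for illustration: a `.npz` archive holding arrays `A`, `b`, `c`.

```python
import numpy as np

def load_problem(path):
    """Hypothetical loader: a .npz archive with arrays A (constraints), b, c (costs)."""
    data = np.load(path)
    return data["A"], data["b"], data["c"]

def first_guess(A, b, c):
    """Garbage baseline output: the all-zeros point, just to exercise the pipeline."""
    return np.zeros(A.shape[1])
```

Once this end-to-end skeleton runs, the `first_guess` body is what gets replaced by the actual solver.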
### Presentation 22 Nov
in the exercise session (Übungsstunde)
#### Explicit algorithm description
Something like pseudocode:
```
A, c = load_problem_from_file()  # A: constraint matrix, c: cost vector
# ... solve ...
return 0
```
Finish: design + CUDA + testing + optimizing.
Start: documentation and final report (max 6 pages).