Parallel Computing on CUDA GPU Devices for Linear Optimization Problems



Abstract. We present a parallel implementation of the randomized (1 + ε)-approximation algorithm for packing and covering linear programs due to Koufogiannakis and Young (2013). Their approach builds on the ideas of the sublinear-time algorithm of Grigoriadis and Khachiyan (1995) and on Garg and Könemann's non-uniform-increment amortization scheme (2007), and with high probability it computes feasible primal and dual solutions whose costs are within a factor of 1 + ε of the optimal cost. To make the algorithm more amenable to parallelization, we also implemented a deterministic variant: instead of updating a single random entry at each iteration, it deterministically updates many entries at once. This makes each iteration slower but allows larger step sizes, which leads to fewer iterations. We use NVIDIA's CUDA parallel computing architecture as the parallel environment. We report a speedup of between one and two orders of magnitude over the running times reported by Koufogiannakis and Young (2013).
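
To illustrate the kind of bulk update the deterministic variant performs, the following is a minimal CUDA sketch: one kernel increments every eligible coordinate in a single pass (where the randomized algorithm would pick one random index per iteration), and a second kernel recomputes the constraint weights exp(ε·(Ax)_i) in parallel. The kernel names, the eligibility mask, and the fixed step rule are illustrative assumptions, not the authors' actual implementation.

// Hedged sketch of a bulk coordinate update; names and step rule are assumptions.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Increment every eligible coordinate of x at once
// (the randomized algorithm updates one random coordinate per iteration).
__global__ void bulkIncrement(float* x, const int* eligible, float step, int n) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j < n && eligible[j]) x[j] += step;
}

// Recompute w_i = exp(eps * (A x)_i) for a dense row-major m x n matrix A.
__global__ void recomputeWeights(const float* A, const float* x,
                                 float* w, float eps, int m, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= m) return;
    float dot = 0.0f;
    for (int j = 0; j < n; ++j) dot += A[i * n + j] * x[j];
    w[i] = expf(eps * dot);
}

int main() {
    const int m = 4, n = 8;                 // toy problem size
    const float eps = 0.1f, step = 0.5f;
    float hA[m * n], hx[n] = {0}, hw[m];
    int heligible[n];
    for (int i = 0; i < m * n; ++i) hA[i] = 1.0f;   // toy constraint matrix
    for (int j = 0; j < n; ++j) heligible[j] = 1;   // all coordinates eligible

    float *dA, *dx, *dw; int *delig;
    cudaMalloc(&dA, m * n * sizeof(float));
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dw, m * sizeof(float));
    cudaMalloc(&delig, n * sizeof(int));
    cudaMemcpy(dA, hA, m * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(delig, heligible, n * sizeof(int), cudaMemcpyHostToDevice);

    // One "iteration" of the deterministic variant: bulk update, then reweight.
    bulkIncrement<<<(n + 255) / 256, 256>>>(dx, delig, step, n);
    recomputeWeights<<<(m + 255) / 256, 256>>>(dA, dx, dw, eps, m, n);

    cudaMemcpy(hw, dw, m * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < m; ++i) printf("w[%d] = %f\n", i, hw[i]);

    cudaFree(dA); cudaFree(dx); cudaFree(dw); cudaFree(delig);
    return 0;
}

In the actual algorithm the step size and the set of updated entries depend on the current weights; the sketch only shows why a bulk update maps naturally onto one CUDA thread per coordinate and one thread per constraint.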


Examples of matrices

Further examples can be obtained upon request by e-mail: sjelic@mathos.hr